List of Archived Posts

2023 Newsgroup Postings (05/31 - 08/07)

Some 3033 (and other) Trivia
Some 3033 (and other) Trivia
H.R.3746 - Fiscal Responsibility Act of 2023
IBM Supercomputer
Some 3090 & channel related trivia:
Success at Midway
Failed Expectations: A Deep Dive Into the Internet's 40 Years of Evolution
Ingenious librarians
Merck Sues Over Law Empowering Medicare to Negotiate With Drugmakers
IBM MVS RAS
IBM MVS RAS
Ingenious librarians
Ingenious librarians
Has the Pentagon Learned from the F-35 Debacle?
Rent/Leased IBM 360
Boeing 747
Grace Hopper (& Ann Hardy)
IBM MVS RAS
IBM 3880 Disk Controller
IBM 3880 Disk Controller
IBM 360/195
'Tax Scam': Republicans Follow Debt Ceiling Fight by Proposing Tax Cuts for Wealthy
IBM 360/195
VM370, SMP, HONE
VM370, SMP, HONE
VM370, SMP, HONE
Is America Already in a Civil War?
IBM 3278
Ingenious librarians
IBM 3278
Profit Inflation Is Real
IBM 3278
IBM 370/195
IBM 360s
IBM Mainframe Emulation
Eastern Airlines 370/195 System/One
"The Big One" (IBM 3033)
Online Forums and Information
IBM 3278
IBM Santa Teresa Lab
Oldest Internet email addresses
The Architect of the Radical Right
The China Debt Trap Lie that Won't Die
AI Scale-up
AI Scale-up
wallpaper updater
wallpaper updater
AI Scale-up
UNMITIGATED RISK
Computer Speed Gains Erased By Modern Software
UNMITIGATED RISK
Do Democracies Always Deliver? As authoritarian capitalism gains credibility, free societies must overcome their internal weaknesses
AI Scale-up
Do Democracies Always Deliver? As authoritarian capitalism gains credibility, free societies must overcome their internal weaknesses
AI Scale-up
How the Net Was Won
How the Net Was Won
How the Net Was Won
The United States' Financial Quandary: ZIRP's Only Exit Path Is a Crash
How the Net Was Won
CICS Product 54yrs old today
mainframe bus wars, How much space did the 68000 registers take up?
Online Before The Cloud
CICS Product 54yrs old today
IBM System/360, 1964
CICS Product 54yrs old today
IBM System/360, 1964
IBM System/360, 1964
Tax Avoidance
Fortran, IBM 1130
Who Employs Your Doctor? Increasingly, a Private Equity Firm
IBM System/360, 1964
Some Virtual Machine History
Some Virtual Machine History
Some Virtual Machine History
And it's gone -- The true cost of interrupts
Private, Public, Internet
Private Equity Firms Increasingly Held Responsible for Healthcare Fraud
IBM System/360, 1964
IBM System/360 JCL
Airline Reservation System
Taligent and Pink
Taligent and Pink
Typing, Keyboards, Computers
The Control Data 6600
Airline Reservation System
5th flr Multics & 4th flr science center
545tech sq, 3rd, 4th, & 5th flrs
545tech sq, 3rd, 4th, & 5th flrs
IBM Remains America's Worst Big Tech Company
IBM 3083
IBM 3083
The Admissions Game
The IBM mainframe: How it runs and why it survives
The IBM mainframe: How it runs and why it survives
370/148 masthead/banner
The IBM mainframe: How it runs and why it survives
The IBM mainframe: How it runs and why it survives
IBM DASD, Virtual Memory
Right-Wing Think Tank's Climate 'Battle Plan' Wages 'War Against Our Children's Future'
IBM 3083
Operating System/360
Typing, Keyboards, Computers
The IBM mainframe: How it runs and why it survives
DASD, Channel and I/O long winded trivia
DASD, Channel and I/O long winded trivia
DASD, Channel and I/O long winded trivia
DASD, Channel and I/O long winded trivia
DASD, Channel and I/O long winded trivia
DASD, Channel and I/O long winded trivia
APL
3380 Capacity compared to 1TB micro-SD
3380 Capacity compared to 1TB micro-SD
VM370
DASD, Channel and I/O long winded trivia
ADVENTURE
APL
DASD, Channel and I/O long winded trivia
Science Center, SCRIPT, GML, SGML, HTML, RSCS/VNET
Science Center, SCRIPT, GML, SGML, HTML, RSCS/VNET
Science Center, SCRIPT, GML, SGML, HTML, RSCS/VNET
Science Center, SCRIPT, GML, SGML, HTML, RSCS/VNET

Some 3033 (and other) Trivia

From: Lynn Wheeler <lynn@garlic.com>
Subject: Some 3033 (and other) Trivia
Date: 31 May, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023c.html#106 Some 3033 (and other) Trivia

Note: MVS had gotten enormously bloated by 3033 time ... a couple of major problems.

Needed more than 16mbytes of real memory; came up with a hack to attach up to 64mbytes of real memory ... although no instructions could directly address >16mbytes. Co-opted two unused bits in the 16bit page table entry, prefixing them to the 12bit 4kbyte real page number ... extending it to a 14bit 4kbyte real page number ... or up to 64mbytes. This allowed application virtual memory to reside above the 16mbyte real memory line ... and 370 CCW IDALs were used to do I/O above the line. There was an issue that sometimes an application page above the 16mbyte-line had to be brought below the 16mbyte-line ... they were planning on using IDAL to write a 4k page (above the line) and then IDAL to read it back in (below the line). I gave them a hack manipulating two page table entries, where one specified the page above the line and one specified the page below the line ... and using MVCL with the two virtual addresses to move the contents from above the line to below the line (not requiring I/O for the move).
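
A rough sketch of the two-bit trick in C (my illustration only; the exact PTE bit positions here are assumed, not the actual 370 layout):

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12                    /* 4kbyte pages */

/* hypothetical PTE layout for illustration: 12bit real page number plus
   two otherwise-unused bits co-opted as a high-order extension */
static uint32_t real_frame_addr(uint16_t pte)
{
    uint32_t pfn12  = (pte >> 4) & 0x0FFF;    /* architected 12bit page number */
    uint32_t extra2 = pte & 0x0003;           /* two co-opted "unused" bits    */
    uint32_t pfn14  = (extra2 << 12) | pfn12; /* 14bit real page number        */
    return pfn14 << PAGE_SHIFT;               /* up to 64mbytes of real memory */
}

int main(void)
{
    uint16_t pte = (uint16_t)((0x0FFFu << 4) | 0x3u);  /* highest frame in scheme */
    printf("top 4k frame starts at 0x%07X (just under 64mbytes)\n",
           (unsigned)real_frame_addr(pte));
    return 0;
}

The MVCL move-without-I/O trick is the same picture from the other direction: point one page table entry at the frame above the line and another at a frame below the line, and a storage-to-storage move between the two virtual addresses does the copy with no channel program at all.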

a couple previous posts
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2010.html#84 locate mode, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2004e.html#41 Infiniband - practicalities for small clusters

Original MVT->VS2 was SVS, effectively MVT running in a 16mbyte virtual address space ... minimum changes to MVT to achieve a similar effect to running MVT in a CP67 16mbyte virtual machine ... w/o CP67. The biggest piece of code was that applications passing channel programs to EXCP/SVC0 now had virtual addresses, and a copy of the passed channel programs had to be built with real addresses (similar to what CP67 had to do running MVT in a virtual machine). A decade+ ago I was asked to track down the decision to make all 370s virtual memory machines. I found a person reporting to the executive that made the decision. Basically MVT storage management was so bad that regions had to be specified four times larger than actually used ... as a result a typical 1mbyte 370/165 would only run four concurrent regions, insufficient to keep the 165 busy (and justified). The biggest amount of code involved hacking CP67 CCWTRANS into EXCP.
https://www.garlic.com/~lynn/2011d.html#73

Then for the move from SVS to MVS, each application was given its own 16mbyte virtual address space ... however in order for the MVS kernel to access application memory (fetching parameters, storing any results, etc), a shared 8mbyte image of the MVS kernel was mapped into every application virtual address space (leaving only 8mbytes for the application). Then because subsystems were also moved out into their own 16mbyte virtual address spaces, a method was needed for subsystems to retrieve parameters & return results ... and the CSA (common segment area) was invented, a shared 1mbyte segment in every virtual address space, where applications would build subsystem parameter lists (and results could be returned).

Space requirements for CSA were somewhat proportional to both the number of subsystems and the number of concurrently executing applications ... quickly exceeding 1mbyte ... and the "common segment area" becomes the "common system area" ... by 3033 time CSAs were 5-6mbytes (leaving 2mbytes for applications) and many places were threatening to have CSA grow to 8mbytes, leaving nothing for applications (8mbyte kernel area plus 8mbyte CSA ... takes up the whole 16mbyte application address space). One of the guys out on the west coast worked out a way to retrofit a subset of XA "access registers" to 3033 as "dual-address space mode" ... subsystems could access data directly in the application address space w/o needing CSA (reducing the prospect of CSA ballooning to 8mbytes).
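
The squeeze is just arithmetic on the 16mbyte address space; a minimal sketch using the sizes mentioned above:

#include <stdio.h>

int main(void)
{
    const int addr_space_mb = 16;   /* per-application virtual address space    */
    const int kernel_mb     = 8;    /* shared MVS kernel image mapped into each */

    /* CSA grew from 1mbyte toward 8mbytes as subsystems & applications grew */
    for (int csa_mb = 1; csa_mb <= 8; csa_mb++)
        printf("CSA %dmb -> %dmb left for the application\n",
               csa_mb, addr_space_mb - kernel_mb - csa_mb);
    return 0;
}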

a couple other posts mentioning the CSA ballooning to 8mbytes
https://www.garlic.com/~lynn/2017i.html#48 64 bit addressing into the future
https://www.garlic.com/~lynn/2017e.html#40 Mainframe Family tree and chronology 2
https://www.garlic.com/~lynn/2015b.html#46 Connecting memory to 370/145 with only 36 bits
https://www.garlic.com/~lynn/2014d.html#62 Difference between MVS and z / OS systems

Turns out that there was a series of minor 3033 processor tweaks for MVS ... that Amdahl considered were designed to keep MVS from running on non-IBM clones. I had gotten authorization to give talks on how (138/148, 4331/4341) ECPS was implemented at monthly (user group) BAYBUNCH meetings (hosted at Stanford SLAC). Amdahl people regularly attended. After meetings, they explained how they had done MACROCODE ... sort of a 370 instruction subset that ran in microcode mode ... which (initially) enormously simplified their ability to respond quickly to the series of trivial changes appearing for 3033 ... and at the time, they were in the process of implementing "HYPERVISOR" (basically a virtual machine subset, not needing VM370, to partition a machine into multiple logical systems ... IBM wasn't able to respond until 1988 for 3090).

some posts mentioning Amdahl macrocode, hypervisor, 3090, lpar, pr/sm
https://www.garlic.com/~lynn/2023c.html#61 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022f.html#10 9 Mainframe Statistics That May Surprise You
https://www.garlic.com/~lynn/2022e.html#102 Mainframe Channel I/O
https://www.garlic.com/~lynn/2022e.html#9 VM/370 Going Away
https://www.garlic.com/~lynn/2022d.html#56 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022c.html#108 TCMs & IBM Mainframe
https://www.garlic.com/~lynn/2022.html#55 Precursor to current virtual machines and containers
https://www.garlic.com/~lynn/2021k.html#119 70s & 80s mainframes
https://www.garlic.com/~lynn/2021j.html#4 IBM Lost Opportunities
https://www.garlic.com/~lynn/2021i.html#31 What is the oldest computer that could be used today for real work?
https://www.garlic.com/~lynn/2021h.html#91 IBM XT/370
https://www.garlic.com/~lynn/2021e.html#67 Amdahl
https://www.garlic.com/~lynn/2021.html#52 Amdahl Computers
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2017i.html#54 Here's a horrifying thought for all you management types
https://www.garlic.com/~lynn/2017i.html#43 learning Unix, was progress in e-mail, such as AOL
https://www.garlic.com/~lynn/2017b.html#37 IBM LinuxONE Rockhopper
https://www.garlic.com/~lynn/2014d.html#17 Write Inhibit
https://www.garlic.com/~lynn/2013f.html#68 Linear search vs. Binary search
https://www.garlic.com/~lynn/2010m.html#74 z millicode: where does it reside?
https://www.garlic.com/~lynn/2007n.html#96 some questions about System z PR/SM
https://www.garlic.com/~lynn/2007b.html#1 How many 36-bit Unix ports in the old days?
https://www.garlic.com/~lynn/2006p.html#42 old hypervisor email
https://www.garlic.com/~lynn/2005u.html#48 POWER6 on zSeries?
https://www.garlic.com/~lynn/2005u.html#40 POWER6 on zSeries?
https://www.garlic.com/~lynn/2005h.html#24 Description of a new old-fashioned programming language

--
virtualization experience starting Jan1968, online at home since Mar1970

Some 3033 (and other) Trivia

From: Lynn Wheeler <lynn@garlic.com>
Subject: Some 3033 (and other) Trivia
Date: 01 June, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023c.html#106 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023d.html#0 Some 3033 (and other) Trivia

Other 3033 trivia: bldg15 product test then got an engineering 4341 ... and in jan1979, I was conned into doing a 4341 benchmark for a national lab that was looking at getting 70 for a compute farm (sort of the tsunami leading edge of cluster supercomputing, much of the same technology was also used for cloud megadatacenters). POK was starting to feel threatened; a cluster of 4341s had higher aggregate throughput than a 3033, was much cheaper, and needed a lot less power, environmentals and floor space. Somebody in Endicott claimed that the head of POK was so threatened that he convinced corporate to cut the allocation of a critical 4341 manufacturing component in half. Old email from somebody in POK about what high-end mainframes were going to do with the onslaught of cluster computing.
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

Old email about 85/165/168/3033/trout lineage

Date: 04/23/81 09:57:42
To: wheeler

your ramblings concerning the corp(se?) showed up in my reader yesterday. like all good net people, i passed them along to 3 other people. like rabbits interesting things seem to multiply on the net. many of us here in pok experience the sort of feelings your mail seems so burdened by: the company, from our point of view, is out of control. i think the word will reach higher only when the almighty $$$ impact starts to hit. but maybe it never will. its hard to imagine one stuffed company president saying to another (our) stuffed company president i think i'll buy from those inovative freaks down the street. '(i am not defending the mess that surrounds us, just trying to understand why only some of us seem to see it).

bob tomasulo and dave anderson, the two poeple responsible for the model 91 and the (incredible but killed) hawk project, just left pok for the new stc computer company. management reaction: when dave told them he was thinking of leaving they said 'ok. 'one word. 'ok. ' they tried to keep bob by telling him he shouldn't go (the reward system in pok could be a subject of long correspondence). when he left, the management position was 'he wasn't doing anything anyway. '

in some sense true. but we haven't built an interesting high-speed machine in 10 years. look at the 85/165/168/3033/trout. all the same machine with treaks here and there. and the hordes continue to sweep in with faster and faster machines. true, endicott plans to bring the low/middle into the current high-end arena, but then where is the high-end product development?


... snip ... top of post, old email index

... this points out 3033 started out being 168 logic remapped to 20% faster chips ... and 3081 was a totally unrelated line, including an enormous increase in circuits compared to the processing throughput (likely motivating TCMs, being able to package all those extra circuits in reasonable volume)
http://www.jfsowa.com/computer/memo125.htm

and once 3033 was out the door the 3033 processor engineers start on trout/3090

trivia: the email refers to my being blamed for online computer conferencing (precursor to modern social media) on the internal network in the late 70s and early 80s (folklore is that when the corporate executive committee was told, 5of6 wanted to fire me) ... it really ballooned when I distributed a trip report of a visit to Jim Gray at Tandem.
https://www.garlic.com/~lynn/subnetwork.html#cmc

--
virtualization experience starting Jan1968, online at home since Mar1970

H.R.3746 - Fiscal Responsibility Act of 2023

From: Lynn Wheeler <lynn@garlic.com>
Subject: H.R.3746 - Fiscal Responsibility Act of 2023
Date: 02 June, 2023
Blog: Facebook
H.R.3746 - Fiscal Responsibility Act of 2023
https://www.congress.gov/bill/118th-congress/house-bill/3746

Note the 90s "fiscal responsibility act"
https://en.wikipedia.org/wiki/PAYGO

required that spending couldn't exceed tax revenue. In 2002, it was on its way to eliminating all federal debt when congress allowed it to lapse. In 2005, the Federal Comptroller General was including in speeches that there was nobody in congress that knew how to do middle school arithmetic, for how badly they were savaging the budget. In 2010, a CBO report said that for 2003-2009, tax revenue was cut by $6T and spending increased by $6T, for a $12T gap compared to the fiscal responsibility act; sort of a confluence of the Federal Reserve and TBTF wanting huge debt, special interests wanting huge tax cuts, and the military-industrial(-congressional) complex wanting huge spending increases (first time congress cut taxes to not pay for two wars). The next administration was able to reduce the annual deficit ... but wasn't able to totally eliminate the debt increases. Then the following administration returned to further enormous tax cuts for the wealthy and spending increases.

The first major legislation after letting the (90s) fiscal responsibility act lapse was medicare part-D. The US Comptroller General said that Part-D was an enormous gift to the pharmaceutical industry, and would come to be a $40T unfunded mandate, dwarfing all other budget items. CBS 60mins had a segment on the 18 Republicans responsible for getting Part-D passed ... just before the final vote, they inserted a one sentence change prohibiting competitive bidding. 60mins showed drugs under Medicare Part-D that were three times the cost of identical drugs with competitive bidding. They also found that within 12months of the vote, all 18 had resigned and were on drug industry payrolls.

Spring 2009, IRS press talked about going after 52,000 wealthy Americans that owed $400B on trillions illegally stashed overseas (over and above the new tax loopholes for legally stashing trillions overseas). Then the new speaker of the house, in a DC weekend news radio segment said that he was cutting the IRS budget for the department responsible for recovering the $400B (plus fines and penalties); and was putting the new Republican "tea party" darlings on the finance committee because members of that committee get the largest lobbying contributions.

trivia: there was later news of billions in fines for the financial institutions that facilitated the illegal tax evasion (jokes about it being just a small cost of doing business for a criminal enterprise) ... but nothing about the 52,000 wealthy Americans (or the $400B).

posts mentioning (90s) fiscal responsibility act posts
https://www.garlic.com/~lynn/submisc.html#fiscal.responsibility.act
comptroller general posts
https://www.garlic.com/~lynn/submisc.html#comptroller.general
medicare part-D posts
https://www.garlic.com/~lynn/submisc.html#medicare.part-d
tax fraud, tax evasion, tax loopholes, tax avoidance, tax haven posts
https://www.garlic.com/~lynn/submisc.html#tax.evasion
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Supercomputer

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Supercomputer
Date: 03 June, 2023
Blog: Facebook
... not exactly Minnesota specific ...

Date: 08/10/89 11:30:02
From: ...
To: ...
cc: wheeler

....

We shared a common observation that even though the applications have a small percentage of the data which is shared, inappropriate memory subsystem design will often introduce artificial conflicts/ delays which in order to speed up will incurr hardware/software costs. In the 3090 world, we have seen the investment IBM made in the area of cache coherence, the scalability limitation and the software investment to adapt the 3090 to the E/S world (ESSL, ...). IBMers often stated that the reason for the performance problem IBM has in the E/S market is because the applications are written for the Cray architecture and the design point of the VF is for balance vector and scalar load. One can sit and argue philosophy all days and nights. However one can not escape facts. The fact is Cray has set standard in the E/S world just like IBM has in the commercial arena. Not only Cray but other E/S machines today are designed to meet the existing applications. If we want to succeed in this market we can not create new/nostandard features.


... snip ... top of post, old email index

I had done stuff off&on with 3090 engineers back to before they started on 3033 (once the 3033 was out the door, they started on "trout" aka 3090). They once panned the whole 3090VF idea ... claiming that vector was invented because floating point was so slow ... that the memory bus could keep a dozen or more floating point units fed. However they claimed that they had optimized the floating point unit on 3090 so that scalar ran as fast as the memory bus ... so having a dozen or more floating point units running simultaneously would be memory bus constrained. A further issue was 3090 had 3mbyte/sec channels while Cray (and others in the E/S world) had HIPPI channels (100mbyte/sec) with high performance disk arrays.

trivia: starting in the early 80s I had the HSDT project, T1 and faster computer links, and working with the NSF director ... was supposed to get $20M to interconnect the NSF supercomputer centers. Then congress cuts the budget, some other things happen, and eventually NSF releases an RFP (in part based on what we already had running). One was a T1 link between the Los Gatos lab and Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in IBM Kingston, which had a boatload of Floating Point Systems boxes (including a 40mbyte/sec disk array ... as part of being able to keep the processor fed).
https://en.wikipedia.org/wiki/Floating_Point_Systems

Cornell University, led by physicist Kenneth G. Wilson, made a supercomputer proposal to NSF with IBM to produce a processor array of FPS boxes attached to an IBM mainframe with the name lCAP.

... snip ...

Preliminary Announcement (28Mar1986)
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

IBM internal politics did not allow us to bid (being blamed for online computer conferencing (precursor to social media) inside IBM likely contributed; folklore is that 5of6 members of the corporate executive committee wanted to fire me). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid, RFP awarded 24Nov87). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet
https://www.technologyreview.com/s/401444/grid-computing/

I had been ridiculing the initial deployment, which called for T1 (links) ... but they had deployed 440kbit/sec links and then, to give the appearance of compliance, had T1 trunks with telco multiplexors running multiple links/trunk. Possibly to shut down my ridiculing, I was asked to be the "red team" for the T3 upgrade and a couple dozen people from a half dozen labs around the world were the blue team. For the executive review, I presented first ... and then a few minutes into the blue team presentation, the executive pounded on the table and said he would lay down in front of a garbage truck before he allowed any but the blue team proposal to go forward.

Later in the 80s we had the HA/6000 project that started out for the NYTimes to move their newspaper system (ATEX) off VAXCluster to RS/6000. I rename it HA/CMP (High Availability Cluster Multi-Processing) when we start working on technical/scientific cluster scale-up with national labs (including porting national lab supercomputer filesystems to HA/CMP) and commercial cluster scale-up with major RDBMS vendors (Sybase, Informix, Ingres, Oracle; simplified because they had VAXCluster support in the same source base with their Unix support). Around the end of January 1992, technical/scientific cluster scale-up is transferred, announced as IBM supercomputer (for technical/scientific *ONLY*), and we were told we couldn't work on anything with more than four processors (we leave IBM a few months later).
old email also mentions 2000-4000 "cluster" configuration
https://www.garlic.com/~lynn/2013h.html#email920113
old email mentions ha/cmp for 1-800 system
https://www.garlic.com/~lynn/2013h.html#email920114
old post about Hester/Ellison meeting mid-jan92, saying that 16-way would be available mid92 and 128-system ye92
https://www.garlic.com/~lynn/95.html#13
17Feb92 press, HA/CMP scale-up has been transferred and announced; Computerworld news 17feb1992 (from wayback machine) ... "IBM establishes laboratory to develop parallel systems" (pg8)
https://archive.org/details/sim_computerworld_1992-02-17_26_7
https://www.garlic.com/~lynn/2001n.html#6000clusters1
11May92 press
https://www.garlic.com/~lynn/2001n.html#6000clusters2
some more press, 15Jun92
https://www.garlic.com/~lynn/2001n.html#6000clusters3

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

some other 3090, Cray, & TCP/IP trivia

The communication group was fiercely fighting off client/server, distributed computing, TCP/IP, etc (trying to preserve their dumb terminal paradigm and install base) and tried to block mainframe TCP/IP announce and ship. Some customers got that reversed; they then changed their tactics and said that since the communication group had corporate responsibility for everything that crossed datacenter walls, mainframe TCP/IP had to be released through them. What shipped got 44kbytes/sec aggregate throughput using nearly a whole 3090 processor. They then ported it to MVS by simulating the VM370 diagnose interface (making it even more compute intensive). I then did RFC1044 support and in some tuning testing at Cray Research, between an IBM 4341 and a Cray, got sustained channel throughput using only a modest amount of the 4341 processor (something like 500 times improvement in bytes moved per instruction executed). trivia: we took off from SFO (for Minneapolis & Chippewa) just a few minutes before the earthquake struck.
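
Back-of-envelope of the bytes-per-instruction comparison: the throughput figures are from the paragraph above, but the instruction rates below are purely illustrative assumptions (not measurements), so the ratio only lands in the same general ballpark as the ~500x cited:

#include <stdio.h>

int main(void)
{
    /* base mainframe TCP/IP: ~44kbytes/sec consuming nearly a whole 3090 CPU */
    double base_bytes_sec = 44e3;
    double base_instr_sec = 15e6;     /* ASSUMED 3090 instruction rate         */

    /* RFC1044 path: sustained channel throughput on a modest slice of a 4341 */
    double rfc_bytes_sec  = 1e6;      /* ASSUMED sustained rate for the test   */
    double rfc_instr_sec  = 0.5e6;    /* ASSUMED slice of a ~1 MIPS 4341       */

    double base = base_bytes_sec / base_instr_sec;   /* bytes per instruction  */
    double rfc  = rfc_bytes_sec / rfc_instr_sec;
    printf("bytes/instruction: %.4f vs %.1f (~%.0fx improvement)\n",
           base, rfc, rfc / base);
    return 0;
}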

rfc1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

--
virtualization experience starting Jan1968, online at home since Mar1970

Some 3090 & channel related trivia:

From: Lynn Wheeler <lynn@garlic.com>
Subject: Some 3090 & channel related trivia:
Date: 04 June, 2023
Blog: Facebook
Some 3090 & channel related trivia:
https://web.archive.org/web/20230719145910/https://www.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3090.html

Had done various work with the 3033 processor & channel engineers ... once the 3033 was out the door, the processor engineers start on trout (aka 3090). The 3880 disk controller had been upgraded with special hardware to handle the 3380 3mbyte/sec transfer. The trout engineers configured the system with enough channels for targeted system throughput, assuming the 3880 was a 3830 but with 3mbyte/sec capability. However the 3830 had a fast horizontal microcode processor, while the 3880 had a slow vertical microcode processor for everything else (but actual data transfer), with lots of handshaking protocol chatter, which significantly drives up channel busy. They realize that they will have to significantly increase the number of channels (to achieve targeted throughput). The channel increase required an additional TCM ... and the 3090 group semi-facetiously say they will bill the 3880 group for the increase in 3090 manufacturing cost. Marketing eventually respins the increase in the number of channels as a wonderful I/O machine (as opposed to being needed to offset the significant increase in 3880 channel busy).

Now in 1980, STL (santa teresa lab, since renamed silicon valley lab) was bursting at the seams and they were moving 300 people from the IMS group to an offsite bldg (with dataprocessing back to the STL datacenter). They tried "remote 3270" support, but found the human factors totally unacceptable. I get con'ed into doing channel extender support, allowing channel attached 3270 controllers to be placed at the offsite bldg, with no perceptible difference in human factors between offsite and inside STL. Then the hardware vendor tries to get IBM to release my support, but there is a group in POK (playing with some serial stuff) that gets it veto'ed (afraid that if it was in the market, it would make it harder to get their stuff released).

Roll forward to when 3090s (announced 12Feb1985) had been in customer shops for a year and the 3090 product administrator tracks me down ... 3090 EREP data showed 20 channel check errors (across all customer machines for the year). There was an industry service that collected customer EREP data and published reports ... and 3090 was appearing worse than some of the clone makers. The 3090 channels were designed such that there should have been only 3-5 errors (aggregate, not 20). It turns out the vendor had duplicated my implementation ... using CSW "channel check" when there had been an unrecoverable transmission error. The product administrator wants me to get the vendor to change their implementation. I research the problem and it turns out that IFCC (interface control check) effectively does the same error recovery/retry as CC ... the change makes the 3090 product administrator happy.

In the early days of REX (before it was renamed REXX and released to customers) I wanted to demonstrate that it wasn't just another pretty scripting language. I chose to rewrite a large assembler code application (problem&dump analyser) in REX with the objective of ten times the function and ten times the performance (some sleight of hand going from assembler to interpreted REX), working half-time over three months. Turns out I finished early, and so added a library of automated functions that look for known failure and problem signatures. I expected it would be released to customers, replacing the existing assembler implementation, but for various reasons it wasn't (it was in use by most internal datacenters and customer PSRs). I do get permission to give presentations at customer user group meetings on how I did the implementation ... and within a few months similar implementations start appearing at customer shops.

Later I get some email from the 3092 group ... the 3090 service processor (started out as a 4331 with a highly customized VM370 release 6 with all service screens implemented in CMS IOS3270, but morphs into a pair of 4361s; note it also requires a pair of 3370FBA, even for MVS shops that have never had FBA support) ... wanting to include "DUMPRX"
https://www.garlic.com/~lynn/2010e.html#email861031
https://www.garlic.com/~lynn/2010e.html#email861223

Now in 1988, the IBM branch office asks me to help LLNL (national lab) get some serial stuff they have been playing with standardized ... which quickly becomes Fibre Channel Standard (FCS, including some stuff I had done in 1980; initially full-duplex, 1gbit/sec transmission, 2gbit/sec aggregate, 200mbyte/sec). Then in 1990 (after a decade), the serial stuff POK has been playing with is shipped with ES/9000 as ESCON (when it is already obsolete, 17mbytes/sec). Then some POK engineers start playing with FCS and define a heavy weight protocol that drastically cuts the native throughput, which is eventually announced as FICON. The latest published numbers I have found are the z196 peak I/O benchmark, where it gets 2M IOPS using 104 FICON (running over 104 FCS). About the same time, a (native) FCS was announced for E5-2600 blades claiming over a million IOPS (i.e. two native FCS with higher throughput than 104 FCS running FICON).
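
The comparison in that last sentence is just per-link division, using the figures above:

#include <stdio.h>

int main(void)
{
    double z196_iops   = 2.0e6;   /* z196 peak I/O benchmark                   */
    double ficon_links = 104;     /* FICON channels used (each over an FCS)    */
    double native_fcs  = 1.0e6;   /* "over a million" IOPS for one native FCS  */

    double per_ficon = z196_iops / ficon_links;
    printf("per FICON link: ~%.0fK IOPS\n", per_ficon / 1e3);
    printf("one native FCS: >%.0fK IOPS, ~%.0fx a FICON link\n",
           native_fcs / 1e3, native_fcs / per_ficon);
    /* so two native FCS (>2M IOPS) out-run all 104 FICON (2M IOPS) combined */
    return 0;
}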

... also comments in other recent posts
https://www.garlic.com/~lynn/2023c.html#106 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023d.html#0 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023d.html#1 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023d.html#3 IBM Supercomputer

getting to play disk engineer in bldg14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk
DUMPRX posts
https://www.garlic.com/~lynn/submain.html#dumprx
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

FICON
https://en.wikipedia.org/wiki/FICON
Fibre Channel
https://en.wikipedia.org/wiki/Fibre_Channel

other Fibre Channel:

Fibre Channel Protocol
https://en.wikipedia.org/wiki/Fibre_Channel_Protocol
Fibre Channel switch
https://en.wikipedia.org/wiki/Fibre_Channel_switch
Fibre Channel electrical interface
https://en.wikipedia.org/wiki/Fibre_Channel_electrical_interface
Fibre Channel over Ethernet
https://en.wikipedia.org/wiki/Fibre_Channel_over_Ethernet

some posts mentioning 3090 Channel Check EREP numbers:
https://www.garlic.com/~lynn/2021k.html#122 Mainframe "Peak I/O" benchmark
https://www.garlic.com/~lynn/2018d.html#48 IPCS, DUMPRX, 3092, EREP
https://www.garlic.com/~lynn/2016h.html#53 Why Can't You Buy z Mainframe Services from Amazon Cloud Services?
https://www.garlic.com/~lynn/2016f.html#5 More IBM DASD RAS discussion
https://www.garlic.com/~lynn/2012l.html#25 X86 server
https://www.garlic.com/~lynn/2012e.html#54 Why are organizations sticking with mainframes?
https://www.garlic.com/~lynn/2011f.html#32 At least two decades back, some gurus predicted that mainframes would disappear
https://www.garlic.com/~lynn/2010m.html#83 3270 Emulator Software
https://www.garlic.com/~lynn/2010i.html#2 Processors stall on OLTP workloads about half the time--almost no matter what you do
https://www.garlic.com/~lynn/2009l.html#60 ISPF Counter
https://www.garlic.com/~lynn/2008g.html#10 Hannaford case exposes holes in law, some say
https://www.garlic.com/~lynn/2007f.html#53 Is computer history taught now?
https://www.garlic.com/~lynn/2006y.html#43 Remote Tape drives
https://www.garlic.com/~lynn/2006n.html#35 The very first text editor
https://www.garlic.com/~lynn/2006i.html#34 TOD clock discussion
https://www.garlic.com/~lynn/2006b.html#21 IBM 3090/VM Humor
https://www.garlic.com/~lynn/2005e.html#13 Device and channel
https://www.garlic.com/~lynn/2004j.html#19 Wars against bad things

some other posts mention 3090 number of channels and 3880 channel busy
https://www.garlic.com/~lynn/2023c.html#45 IBM DASD
https://www.garlic.com/~lynn/2023.html#41 IBM 3081 TCM
https://www.garlic.com/~lynn/2023.html#4 Mainrame Channel Redrive
https://www.garlic.com/~lynn/2022h.html#114 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022g.html#75 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022g.html#4 3880 DASD Controller
https://www.garlic.com/~lynn/2022e.html#100 Mainframe Channel I/O
https://www.garlic.com/~lynn/2022e.html#49 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022c.html#106 TCMs & IBM Mainframe
https://www.garlic.com/~lynn/2022c.html#66 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022b.html#77 Channel I/O
https://www.garlic.com/~lynn/2022b.html#15 Channel I/O
https://www.garlic.com/~lynn/2022.html#13 Mainframe I/O
https://www.garlic.com/~lynn/2021j.html#92 IBM 3278
https://www.garlic.com/~lynn/2021i.html#30 What is the oldest computer that could be used today for real work?
https://www.garlic.com/~lynn/2021f.html#23 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2021c.html#66 ACP/TPF 3083
https://www.garlic.com/~lynn/2020.html#42 If Memory Had Been Cheaper
https://www.garlic.com/~lynn/2019b.html#80 TCM
https://www.garlic.com/~lynn/2019.html#79 How many years ago?
https://www.garlic.com/~lynn/2019.html#58 Bureaucracy and Agile
https://www.garlic.com/~lynn/2019.html#51 3090/3880 trivia
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2018e.html#71 PDP 11/40 system manual
https://www.garlic.com/~lynn/2018c.html#30 Bottlenecks and Capacity planning
https://www.garlic.com/~lynn/2018.html#0 Intrigued by IBM

some past posts mentioning z196 peak i/o benchmark
https://www.garlic.com/~lynn/2023c.html#46 IBM DASD
https://www.garlic.com/~lynn/2023c.html#25 IBM Downfall
https://www.garlic.com/~lynn/2023.html#89 IBM San Jose
https://www.garlic.com/~lynn/2022h.html#114 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022h.html#113 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022g.html#71 Mainframe and/or Cloud
https://www.garlic.com/~lynn/2022f.html#10 9 Mainframe Statistics That May Surprise You
https://www.garlic.com/~lynn/2022e.html#100 Mainframe Channel I/O
https://www.garlic.com/~lynn/2022e.html#89 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#72 FedEx to Stop Using Mainframes, Close All Data Centers By 2024
https://www.garlic.com/~lynn/2022e.html#49 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#26 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#18 3270 Trivia
https://www.garlic.com/~lynn/2022d.html#48 360&370 I/O Channels
https://www.garlic.com/~lynn/2022d.html#6 Computer Server Market
https://www.garlic.com/~lynn/2022c.html#111 Financial longevity that redhat gives IBM
https://www.garlic.com/~lynn/2022c.html#67 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022b.html#66 David Boggs, Co-Inventor of Ethernet, Dies at 71
https://www.garlic.com/~lynn/2022b.html#57 Fujitsu confirms end date for mainframe and Unix systems
https://www.garlic.com/~lynn/2022b.html#15 Channel I/O
https://www.garlic.com/~lynn/2022.html#95 Latency and Throughput
https://www.garlic.com/~lynn/2022.html#84 Mainframe Benchmark
https://www.garlic.com/~lynn/2022.html#76 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#69 IBM Bus&Tag Channels
https://www.garlic.com/~lynn/2022.html#13 Mainframe I/O
https://www.garlic.com/~lynn/2021k.html#122 Mainframe "Peak I/O" benchmark
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021k.html#115 Peer-Coupled Shared Data Architecture
https://www.garlic.com/~lynn/2021k.html#109 Network Systems
https://www.garlic.com/~lynn/2021k.html#53 IBM Mainframe
https://www.garlic.com/~lynn/2021j.html#92 IBM 3278
https://www.garlic.com/~lynn/2021j.html#75 IBM 3278
https://www.garlic.com/~lynn/2021j.html#3 IBM Lost Opportunities
https://www.garlic.com/~lynn/2021i.html#30 What is the oldest computer that could be used today for real work?
https://www.garlic.com/~lynn/2021i.html#16 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021h.html#44 OoO S/360 descendants
https://www.garlic.com/~lynn/2021f.html#93 IBM ESCON Experience
https://www.garlic.com/~lynn/2021f.html#42 IBM Mainframe
https://www.garlic.com/~lynn/2021f.html#23 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2021e.html#68 Amdahl
https://www.garlic.com/~lynn/2021d.html#55 Cloud Computing
https://www.garlic.com/~lynn/2021c.html#71 What could cause a comeback for big-endianism very slowly?
https://www.garlic.com/~lynn/2021b.html#64 Early Computer Use
https://www.garlic.com/~lynn/2021b.html#0 Will The Cloud Take Down The Mainframe?
https://www.garlic.com/~lynn/2021.html#56 IBM Quota
https://www.garlic.com/~lynn/2021.html#55 IBM Quota
https://www.garlic.com/~lynn/2021.html#4 3390 CKD Simulation

--
virtualization experience starting Jan1968, online at home since Mar1970

Success at Midway

From: Lynn Wheeler <lynn@garlic.com>
Subject: Success at Midway
Date: 04 June, 2023
Blog: Facebook
... reminded me of a number of discussions with John Boyd ....

Pacific Crucible: War at Sea in the Pacific, 1941-1942 (Vol. 1) (The Pacific War Trilogy)
https://www.amazon.com/Pacific-Crucible-War-Sea-1941-1942-ebook/dp/B005LW5JL2/
pg480/loc9399-9403:

Many men had contributed bits and pieces to the breaking of the Japanese code, including those stationed in Melbourne and Washington; but it was Joe Rochefort who had taken those bits and pieces and assembled them into an accurate mosaic. Rochefort had a rare genius for the art of sifting through masses of disparate and contradictory data. He drew liberally upon his knowledge and experience as a naval officer who had served long years at sea, and also upon his understanding of the Japanese language and culture.


pg481/loc9412-15:

But in June 1942, no one outside a privileged circle knew that the Japanese code had been broken--and even within that privileged circle, few were aware that Rochefort had drawn the right conclusions while his superiors in Washington had clung obdurately to the wrong ones. Rochefort may have been vindicated by events, but he was a marked man.


pg481/loc9415-21:

His enemies in Washington--the Redman brothers, Commander Wenger, and unidentified members of Admiral King's staff--apparently conspired to seize credit for the victory while diminishing Hypo's role. The conspirators, perfectly aware that strict wartime secrecy would abet their misconduct, did not merely shade the facts but directly lied. The Redmans falsely maintained that analysts at OP-20-G, rather than Hypo, had identified June 3 as the date of the initial attacks; they argued that Rochefort was unqualified to run Hypo and should be removed from the post, because he was " an ex-Japanese language student " who was "not technically trained in Naval Communications"; and they told everyone who would listen that " Pearl Harbor had missed the boat at the Battle of Midway but the Navy Department had saved the day."


pg482/loc9423-26:

The campaign succeeded. Both men were soon promoted, and the elder brother, Rear Admiral Joseph R. Redman, was awarded a Distinguished Service Medal. When Rochefort was put up for the same medal with Nimitz's strong endorsement, the application was denied. Rochefort was recalled to Washington and shunted into jobs in which his talents were wasted.


pg482/loc9426-29:

"I have given a great deal of thought to the Rochefort affair," Tom Dyer later wrote, "and I have been unwillingly forced to the conclusion that Rochefort committed the one unforgivable sin. To certain individuals of small mind and overweening ambition, there is no greater insult than to be proved wrong." Jasper Holmes made the same point a bit more poetically: "It was not the individual for whom the bell tolled but the navy [that] died a little."

... snip ...

previous posts mentioning Rochefort
https://www.garlic.com/~lynn/2022b.html#21 To Be Or To Do
https://www.garlic.com/~lynn/2017i.html#88 WW II cryptography
https://www.garlic.com/~lynn/2017i.html#86 WW II cryptography
https://www.garlic.com/~lynn/2017i.html#85 WW II cryptography

Boyd posts and web refs
https://www.garlic.com/~lynn/subboyd.html
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex

--
virtualization experience starting Jan1968, online at home since Mar1970

Failed Expectations: A Deep Dive Into the Internet's 40 Years of Evolution

From: Lynn Wheeler <lynn@garlic.com>
Subject: Failed Expectations: A Deep Dive Into the Internet's 40 Years of Evolution
Date: 05 June, 2023
Blog: Facebook
Failed Expectations: A Deep Dive Into the Internet's 40 Years of Evolution
https://circleid.com/posts/20230524-failed-expectations-a-deep-dive-into-the-internets-40-years-of-evolution

a couple recent posts with some of the history:
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/
https://www.linkedin.com/pulse/inventing-internet-lynn-wheeler/
comment in this (facebook public group) post
https://www.facebook.com/groups/internetoldfarts/posts/783290399984847/

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
3-tier posts
https://www.garlic.com/~lynn/subnetwork.html#3tier
GML, SGML, HTML posts
https://www.garlic.com/~lynn/submain.html#sgml

--
virtualization experience starting Jan1968, online at home since Mar1970

Ingenious librarians

From: Lynn Wheeler <lynn@garlic.com>
Subject: Ingenious librarians
Date: 05 June, 2023
Blog: Facebook
Ingenious librarians. A group of 1970s campus librarians foresaw our world of distributed knowledge and research, and designed search tools for it
https://aeon.co/essays/the-1970s-librarians-who-revolutionised-the-challenge-of-search

Within a year of taking a two credit hr intro to fortran/computers, the univ. hires me fulltime responsible for mainframe software (OS/360). Then the Univ. library gets an ONR (navy) grant to do an online catalog; part of the money goes for an IBM 2321 ("datacell"). The univ. online catalog was also selected as one of the betatest sites for IBM's CICS product ... and helping debug CICS & the online catalog was added to my tasks.

Mid-90s, I was asked to come in to look at NIH NLM ... and two of the programmers that had done NLM's IBM mainframe online catalog (about the same time the univ was doing theirs) were still there and we would kibitz about the old days (1960s).
https://www.nlm.nih.gov/

.... NLM story was that they created "grateful med" on apple2 around 1980. The problem was the NLM online catalog was so large that a search could come up with hundreds of thousands of responses. Searches with around 5-7 terms would tend to be bi-modal: tens of thousands of responses or zero. Default for "grateful med" was to return the number of results (not the actual results), looking for the magic search that had more than zero but less than a hundred.
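
A minimal sketch of that kind of count-first refinement loop (the catalog lookup here is a toy stand-in, not the real NLM interface): keep adding terms, asking only for the hit count, until the count is non-zero but small enough to read:

#include <stdio.h>

/* toy stand-in for the catalog: hit counts drop off abruptly as terms are
   added (the "bi-modal" behavior described above) */
static long count_hits(int nterms)
{
    static const long hits[7] = { 250000, 80000, 30000, 12000, 0, 0, 0 };
    return hits[nterms - 1];
}

int main(void)
{
    for (int nterms = 1; nterms <= 7; nterms++) {
        long hits = count_hits(nterms);
        printf("%d term(s) -> %ld hits\n", nterms, hits);
        if (hits > 0 && hits < 100) {           /* the "magic" search */
            printf("usable result set at %d terms\n", nterms);
            return 0;
        }
    }
    printf("never landed between 1 and 99 hits (too many, then zero)\n");
    return 0;
}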

both the univ & NLM implementations were done with BDAM

cics/bdam posts
https://www.garlic.com/~lynn/submain.html#cics

a few posts mentioning online catalog and NIH NLM:
https://www.garlic.com/~lynn/2022c.html#39 After IBM
https://www.garlic.com/~lynn/2022.html#38 IBM CICS
https://www.garlic.com/~lynn/2019c.html#28 CICS Turns 50 Monday, July 8
https://www.garlic.com/~lynn/2018c.html#13 Graph database on z/OS?
https://www.garlic.com/~lynn/2018b.html#54 Brain size of human ancestors evolved gradually over 3 million years
https://www.garlic.com/~lynn/2017i.html#4 EasyLink email ad
https://www.garlic.com/~lynn/2017f.html#34 The head of the Census Bureau just quit, and the consequences are huge
https://www.garlic.com/~lynn/2010j.html#73 IBM 3670 Brokerage Communications System
https://www.garlic.com/~lynn/2010e.html#9 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2009q.html#25 Old datasearches
https://www.garlic.com/~lynn/2009o.html#38 U.S. house decommissions its last mainframe, saves $730,000
https://www.garlic.com/~lynn/2008l.html#80 Book: "Everyone Else Must Fail" --Larry Ellison and Oracle ???
https://www.garlic.com/~lynn/2006l.html#31 Google Architecture
https://www.garlic.com/~lynn/2004p.html#0 Relational vs network vs hierarchic databases
https://www.garlic.com/~lynn/2004o.html#67 Relational vs network vs hierarchic databases

we had been brought in to look at doing some stuff with UMLS; got a CDROM to work with
http://www.nlm.nih.gov/research/umls/about_umls.html

some posts mentioning NIH NLM UMLS
https://www.garlic.com/~lynn/2022d.html#74 WAIS. Z39.50
https://www.garlic.com/~lynn/2022c.html#39 After IBM
https://www.garlic.com/~lynn/2018c.html#13 Graph database on z/OS?
https://www.garlic.com/~lynn/2018b.html#54 Brain size of human ancestors evolved gradually over 3 million years
https://www.garlic.com/~lynn/2017g.html#57 Stopping the Internet of noise
https://www.garlic.com/~lynn/2017f.html#34 The head of the Census Bureau just quit, and the consequences are huge
https://www.garlic.com/~lynn/2014d.html#55 Difference between MVS and z / OS systems
https://www.garlic.com/~lynn/2008m.html#74 Speculation ONLY
https://www.garlic.com/~lynn/2005j.html#47 Where should the type information be?
https://www.garlic.com/~lynn/2005j.html#45 Where should the type information be?
https://www.garlic.com/~lynn/2005d.html#57 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2004l.html#52 Specifying all biz rules in relational data
https://www.garlic.com/~lynn/2004e.html#53 c.d.theory glossary (repost)
https://www.garlic.com/~lynn/aadsm15.htm#15 Resolving an identifier into a meaning

some DIALOG (search) history ... also some NIH NLM involvement; gone 404, but lives on at the wayback machine
https://web.archive.org/web/20050123104257/http://www.dialog.com/about/history/pioneers1.pdf
https://web.archive.org/web/20120911120037/http://www.dialog.com/about/history/pioneers2.pdf

some past posts
https://www.garlic.com/~lynn/2022g.html#73 Anyone knew or used the Dialog service back in the 80's?
https://www.garlic.com/~lynn/2022g.html#16 Early Internet
https://www.garlic.com/~lynn/2017i.html#4 EasyLink email ad
https://www.garlic.com/~lynn/2016g.html#64 The Forgotten World of BBS Door Games - Slideshow from PCMag.com
https://www.garlic.com/~lynn/2016d.html#36 The Network Nation, Revised Edition
https://www.garlic.com/~lynn/2016.html#28 1976 vs. 2016?
https://www.garlic.com/~lynn/2014e.html#39 Before the Internet: The golden age of online services
https://www.garlic.com/~lynn/2009q.html#46 Old datasearches

--
virtualization experience starting Jan1968, online at home since Mar1970

Merck Sues Over Law Empowering Medicare to Negotiate With Drugmakers

From: Lynn Wheeler <lynn@garlic.com>
Subject: Merck Sues Over Law Empowering Medicare to Negotiate With Drugmakers
Date: 06 June, 2023
Blog: Facebook
Merck Sues Over Law Empowering Medicare to Negotiate With Drugmakers. The company is heavily reliant on a cancer drug that could be targeted by a program intended to lower drug prices.
https://www.nytimes.com/2023/06/06/business/merck-medicare-drug-prices.html
Merck sues over Medicare price negotiations. The drug giant goes to court over the Biden administration's attempts to curb pharmaceutical costs.
https://www.washingtonpost.com/business/2023/06/06/merck-medicare-price-controls/
Merck sues government over drug price negotiation. Drugmaker seeks injunction against parts of last year's reconciliation law
https://rollcall.com/2023/06/06/merck-sues-government-over-drug-price-negotiation/
Merck sues federal government, calling plan to negotiate Medicare drug prices extortion
https://apnews.com/article/merck-lawsuit-medicare-drug-prices-179cca2e1b9319782683909ccca5d24a
Merck sues US government to halt Medicare drug price negotiation
https://www.reuters.com/business/healthcare-pharmaceuticals/merck-sues-us-government-halt-medicare-drug-price-negotiation-2023-06-06/

Note the 90s "fiscal responsibility act"
https://en.wikipedia.org/wiki/PAYGO

required that spending couldn't exceed tax revenue. In 2002, it was on its way to eliminating all federal debt when congress allowed it to lapse. In 2005, the Federal Comptroller General was including in speeches that there was nobody in congress that knew how to do middle school arithmetic, for how badly they were savaging the budget. In 2010, a CBO report said that for 2003-2009, tax revenue was cut by $6T and spending increased by $6T, for a $12T gap compared to the fiscal responsibility act; sort of a confluence of the Federal Reserve and TBTF wanting huge debt, special interests wanting huge tax cuts, and the military-industrial(-congressional) complex wanting huge spending increases (first time congress cut taxes to not pay for two wars). The next administration was able to reduce the annual deficit ... but wasn't able to totally eliminate the debt increases. Then the following administration returned to further enormous tax cuts for the wealthy and spending increases.

The first major legislation after letting the (90s) fiscal responsibility act lapse was MEDICARE PART-D. The US Comptroller General said that PART-D was an enormous gift to the pharmaceutical industry, and would come to be a $40T unfunded mandate, dwarfing all other budget items. CBS 60mins had a segment on the 18 Republicans responsible for getting PART-D passed ... just before the final vote, they inserted a one sentence change prohibiting competitive bidding (and blocked the CBO from distributing a report on the effect of the last minute change). 60mins showed drugs under MEDICARE PART-D that were three times the cost of identical drugs with competitive bidding. They also found that within 12months of the vote, all 18 had resigned and were on drug industry payrolls.

MEDICARE PART-D posts
https://www.garlic.com/~lynn/submisc.html#medicare.part-d
(90s) fiscal responsibility act posts
https://www.garlic.com/~lynn/submisc.html#fiscal.responsibility.act
comptroller general posts
https://www.garlic.com/~lynn/submisc.html#comptroller.general

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM MVS RAS

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM MVS RAS
Date: 06 June, 2023
Blog: Facebook
The MVS RAS manager would get irate when I made jokes that "MVS Recovery" referred to MVS recovery repeatedly covering up a problem, until there was no evidence left about what caused the problem.

I get sucked into working on a 370 16-processor multiprocessor, which everybody thought was great. Then somebody tells the head of POK that it could be decades before the POK favorite son operating system (MVS) would have (effective) 16-way support. Then some of us were invited to never visit POK again, and the 3033 processor engineers were told to totally focus on 3033. POK doesn't ship a 16-processor machine until after the turn of the century, nearly 25yrs later.

I then transfer from the Science Center to San Jose Research and am allowed to wander around silicon valley IBM and customer datacenters, including disk engineering (bldg14) and product test (bldg15) across the street. They were doing 7x24, prescheduled, stand-alone testing. They mentioned that they had tried MVS, but it had 15min MTBF (mean-time between failures) requiring manual re-IPL. I offer to rewrite the input/output supervisor, making it bullet proof and never fail ... enabling any amount of on-demand, concurrent testing (greatly improving productivity). I then do an internal-only research report on the effort, happening to mention the MVS 15min MTBF ... bringing down the wrath of the MVS group on my head (I was informally told there were even attempts to have me separated from the IBM company). The joke was somewhat on them; I was already being told I had no career, no promotions, and no raises ... for having offended various groups and executives.

A couple years later, 3880/3380 was getting ready to ship and FE had a suite of 57 emulated errors they thought could be expected. For all 57 errors, MVS would still fail (requiring manual re-IPL) and in 2/3rds of the cases left no indication of the failure cause ... old email reference
https://www.garlic.com/~lynn/2007.html#email801015

... I didn't feel sorry ... recent other IO/channel related posts
https://www.linkedin.com/pulse/mainframe-channel-io-lynn-wheeler/
https://www.linkedin.com/pulse/mainframe-channel-redrive-lynn-wheeler/

... other history (involving POK) ....

Charlie had invented Compare-And-Swap (name chosen because CAS are Charlie's initials) when he was working on CP/67 fine-grain multiprocessor locking at the Cambridge Science Center. In meetings with the 370 architecture owners trying to get CAS added to 370, it was initially rebuffed because they said that the MVT 360/65 multiprocessor group claimed that "Test-And-Set" was sufficient. We were challenged to come up with CAS uses that weren't strictly multiprocessor; thus were born the uses by large multiprogramming applications (like DBMS) to synchronize/serialize operations w/o requiring disabled kernel (operating system services) calls (some of the examples still appear in "Principles Of Operation").
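
For anyone who hasn't seen the pattern, this is the shape of those non-multiprocessor uses, shown with C11 atomics rather than 370 assembler (illustration only; the actual examples in Principles Of Operation are in assembler): a shared value is updated without any disabled/kernel call by retrying whenever another task updated it first:

#include <stdatomic.h>
#include <stdio.h>

static _Atomic unsigned long shared_counter;   /* e.g. a count or queue head */

/* add to the shared value without serializing through the kernel */
static void add_to_counter(unsigned long n)
{
    unsigned long old = atomic_load(&shared_counter);
    /* compare-and-swap loop: store old+n only if nobody changed it meanwhile;
       on failure 'old' is refreshed with the current value and we retry */
    while (!atomic_compare_exchange_weak(&shared_counter, &old, old + n))
        ;
}

int main(void)
{
    add_to_counter(5);
    add_to_counter(7);
    printf("counter = %lu\n", atomic_load(&shared_counter));
    return 0;
}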

a decade+ ago, I got asked if I could track down the decision to add "virtual memory" support to all 370s. I found a person on the executive's staff. Basically MVT storage management was so bad that MVT regions had to be specified four times larger than used; as a result, a typical 1mbyte 370/165 was limited to running four regions concurrently (insufficient to keep the machine busy and justified). Archived post with pieces of the email exchange
https://www.garlic.com/~lynn/2011d.html#73

Initially it was MVT->SVS, very similar to running MVT in a CP67 16mbyte virtual machine ... w/o CP67, with pieces of code migrated to MVT. MVT had the same problem CP67 had with channel programs passed to EXCP/SVC0, which had virtual addresses (and channels required real addresses). CP67 "CCWTRANS" was hacked into EXCP to create copies of the passed channel programs, replacing virtual addresses with real.
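
A greatly simplified sketch of what that copying amounts to (my own illustration, not the CP67/EXCP code; the real CCWTRANS also dealt with data chaining, TICs, IDALs, page fixing, and more): walk the caller's channel program and build a shadow copy whose data addresses are real rather than virtual:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* simplified 370-style CCW for illustration: opcode, data address, flags, count */
struct ccw {
    uint8_t  op;
    uint32_t addr;       /* virtual in the caller's program, real in the shadow */
    uint8_t  flags;
    uint16_t count;
};

#define CCW_CC 0x40      /* command chaining: another CCW follows */

/* toy stand-in for the page-table lookup (and page fixing) the real code did */
static uint32_t virt_to_real(uint32_t vaddr) { return vaddr + 0x100000; }

/* copy the channel program, translating each data address; returns CCWs copied */
static size_t build_shadow(const struct ccw *user, struct ccw *shadow, size_t max)
{
    size_t n = 0;
    for (;;) {
        shadow[n] = user[n];                          /* copy op/flags/count   */
        shadow[n].addr = virt_to_real(user[n].addr);  /* virtual -> real       */
        n++;
        if (n >= max || !(user[n - 1].flags & CCW_CC))
            break;                                    /* end of the chain      */
    }
    return n;
}

int main(void)
{
    struct ccw prog[2] = {
        { 0x02, 0x001000, CCW_CC, 4096 },   /* read 4k, chained to next CCW */
        { 0x02, 0x002000, 0x00,   4096 },   /* read 4k, end of program      */
    };
    struct ccw shadow[2];
    size_t n = build_shadow(prog, shadow, 2);
    printf("shadow has %zu CCWs, first data addr 0x%06X\n",
           n, (unsigned)shadow[0].addr);
    return 0;
}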

In the first half of the 70s, there was the Future System effort that was going to completely replace 370, and internal politics were killing off 370 efforts (claims are that the lack of new 370s gave the clone 370 makers their market foothold). Then when FS imploded, there was a mad rush to get stuff back into the product pipelines, including kicking off the quick&dirty 3033&3081 efforts. The head of POK also convinces corporate to kill the VM370/CMS product, shutdown their development group and move everybody to POK for MVS/XA (claiming that otherwise MVS/XA wouldn't ship on time). Endicott eventually manages to acquire the VM370/CMS product mission, but has to reconstitute a development group from scratch.

Note in the move from SVS to MVS, each application was given its own 16mbyte virtual address space ... however, in order for the MVS kernel to access application memory (fetching parameters, storing results, etc), a shared 8mbyte image of the MVS kernel was mapped into every application virtual address space (leaving only 8mbytes for the application). Then, because subsystems were also moved out into their own 16mbyte virtual address spaces, a method was needed for subsystems to retrieve parameters & return results ... and the CSA (common segment area) was invented, a shared 1mbyte segment in every virtual address space, where applications would build subsystem parameter lists (and results could be returned). Space requirements for the CSA were somewhat proportional to both the number of subsystems and the number of concurrently executing applications ... quickly exceeding 1mbyte ... and the "common segment area" becomes the "common system area". By 3033 time, CSAs were 5-6mbytes (leaving 2mbytes for applications) and at many installations the CSA was threatening to grow to 8mbytes, leaving nothing for applications (the 8mbyte kernel area plus an 8mbyte CSA takes up the whole 16mbyte address space). One of the guys out on the west coast worked out a way to retrofit a subset of the XA "access registers" to 3033 as "dual-address space mode" ... subsystems could access data directly in the application address space w/o needing the CSA (reducing the prospect of the CSA ballooning to 8mbytes and eliminating any ability to run customer applications).
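
The squeeze is simple arithmetic; a tiny C illustration (numbers taken from the paragraph above, nothing else assumed):

#include <stdio.h>

#define MB (1024L * 1024L)

/* space left for the application in a 16mbyte MVS address space,
   after the shared 8mbyte kernel image and the CSA are mapped in */
static long app_space(long csa)
{
    long left = 16 * MB - 8 * MB - csa;
    return left > 0 ? left : 0;
}

int main(void)
{
    printf("CSA 1mbyte: %ldmbyte left for application\n", app_space(1 * MB) / MB);
    printf("CSA 6mbyte: %ldmbyte left for application\n", app_space(6 * MB) / MB);
    printf("CSA 8mbyte: %ldmbyte left for application\n", app_space(8 * MB) / MB);
    return 0;
}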

Also in the move from SVS to MVS, there was a song at SHARE ... about customers not converting to MVS and IBM offering sales people $4K bonuses to get customers to migrate to MVS (there is another story involving problems getting customers to migrate to MVS/XA).
http://www.mxg.com/thebuttonman/boney.asp

Long winded account of IBM downhill slide .... starting with Learson's failure to block the bureaucrats, careerists, and MBAs destroying the Watson legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
https://www.linkedin.com/pulse/boyd-ibm-wild-duck-discussion-lynn-wheeler/
https://www.linkedin.com/pulse/ibm-downfall-lynn-wheeler/
https://www.linkedin.com/pulse/multi-modal-optimization-old-post-from-6yrs-ago-lynn-wheeler
https://www.linkedin.com/pulse/ibm-breakup-lynn-wheeler/

SMP, multiprocessor, tightly-coupled and/or Compare-And-Swap posts
https://www.garlic.com/~lynn/subtopic.html#smp
cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

some recent 3033 &/or 3090 posts
https://www.garlic.com/~lynn/2023d.html#4 Some 3090 & channel related trivia:
https://www.garlic.com/~lynn/2023d.html#3 IBM Supercomputer
https://www.garlic.com/~lynn/2023d.html#1 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023d.html#0 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023c.html#106 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023c.html#104 IBM Term "DASD"
https://www.garlic.com/~lynn/2023c.html#98 Fortran

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM MVS RAS

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM MVS RAS
Date: 07 June, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS

After FS implodes, besides the 370 16-processor effort, Endicott also sucked me into helping with ECPS for virgil/tully (138/148, also used for 4300s) ... old archived post with initial analysis
https://www.garlic.com/~lynn/94.html#21

In the early 80s, I get permission to give presentations on how ECPS was done at the monthly BAYBUNCH (user group meetings hosted by Stanford SLAC). Amdahl people would quiz me for additional details after meetings ... they also explained how they had done MACROCODE ... sort of a 370 instruction subset that ran in microcode mode ... which (initially) enormously simplified their ability to respond quickly to the series of trivial changes appearing for 3033 ... and at the time, they were in the process of implementing "HYPERVISOR" (basically a virtual machine subset, not needing VM370, to partition a machine into multiple logical systems ... IBM wasn't able to respond with PR/SM and LPAR until 1988 for 3090).

POK was finding that customers weren't converting to MVS/XA like they were supposed to (sort of like the earlier SVS->MVS migration) and Amdahl was doing better because customers could run MVS & MVS/XA concurrently. When POK had "killed" VM370 (and shutdown the development group), some of the people had done the VMTOOL in support of MVS/XA development, never intended to ship to customers. POK then decides to ship VMTOOL for aiding MVS/XA conversion ... as VM/MA and then VM/SF. Then POK had a proposal for a couple-hundred-person group to bring VM/MA & VM/SF up to the feature, function, and performance of VM370. Endicott's alternative was, sort of, that an internal SYSPROG in Rochester had added full 370/XA support to VM370 ... POK won.

some posts mentioning MACROCODE, hypervisor, PR/SM, and LPAR
https://www.garlic.com/~lynn/2023d.html#0 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023c.html#61 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2022g.html#58 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022f.html#10 9 Mainframe Statistics That May Surprise You
https://www.garlic.com/~lynn/2022e.html#102 Mainframe Channel I/O
https://www.garlic.com/~lynn/2022e.html#9 VM/370 Going Away
https://www.garlic.com/~lynn/2022d.html#56 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022c.html#108 TCMs & IBM Mainframe
https://www.garlic.com/~lynn/2022.html#55 Precursor to current virtual machines and containers
https://www.garlic.com/~lynn/2021k.html#119 70s & 80s mainframes
https://www.garlic.com/~lynn/2021k.html#106 IBM Future System
https://www.garlic.com/~lynn/2021j.html#4 IBM Lost Opportunities
https://www.garlic.com/~lynn/2021i.html#31 What is the oldest computer that could be used today for real work?
https://www.garlic.com/~lynn/2021h.html#91 IBM XT/370
https://www.garlic.com/~lynn/2021e.html#67 Amdahl
https://www.garlic.com/~lynn/2021.html#52 Amdahl Computers
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2017i.html#54 Here's a horrifying thought for all you management types
https://www.garlic.com/~lynn/2017i.html#43 learning Unix, was progress in e-mail, such as AOL
https://www.garlic.com/~lynn/2017b.html#37 IBM LinuxONE Rockhopper
https://www.garlic.com/~lynn/2014j.html#19 DG Nova 1200 as console
https://www.garlic.com/~lynn/2014d.html#17 Write Inhibit
https://www.garlic.com/~lynn/2013f.html#68 Linear search vs. Binary search
https://www.garlic.com/~lynn/2010m.html#74 z millicode: where does it reside?
https://www.garlic.com/~lynn/2007n.html#96 some questions about System z PR/SM
https://www.garlic.com/~lynn/2007b.html#1 How many 36-bit Unix ports in the old days?
https://www.garlic.com/~lynn/2006p.html#42 old hypervisor email
https://www.garlic.com/~lynn/2005u.html#48 POWER6 on zSeries?
https://www.garlic.com/~lynn/2005u.html#40 POWER6 on zSeries?
https://www.garlic.com/~lynn/2005p.html#29 Documentation for the New Instructions for the z9 Processor
https://www.garlic.com/~lynn/2005h.html#24 Description of a new old-fashioned programming language
https://www.garlic.com/~lynn/2005d.html#59 Misuse of word "microcode"
https://www.garlic.com/~lynn/2003.html#56 Wild hardware idea

--
virtualization experience starting Jan1968, online at home since Mar1970

Ingenious librarians

From: Lynn Wheeler <lynn@garlic.com>
Subject: Ingenious librarians
Date: 08 June, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#7 Ingenious librarians

After transferring to San Jose Research ... did some work with Jim Gray and Vera Watson on the original SQL/relational (System/R) ... also involved in the transfer of the technology to Endicott (under the "radar" while the company was preoccupied with the next great DBMS "EAGLE") for SQL/DS (note when EAGLE implodes, there is then a request for how fast System/R could be ported to MVS, which is eventually released as DB2, originally for decision-support *only*). Also, the Los Gatos VLSI lab (bldg29) lets me have part of a wing with offices and labs. In return, I help with various things. One was that they were doing a "Semantic Network" DBMS implementation with Sowa ... at the time with IBM STL
http://www.jfsowa.com/pubs/semnet.htm

After leaving IBM, I continued to dabble with semantic network implementations ... and it was a major reason I was brought into NLM for UMLS. NLM had a vendor contract to organize their medical knowledge taxonomy in an RDBMS ... which was taking 18 months elapsed time to add 6 months of new medical knowledge. Starting from scratch with the UMLS CDROM ... I took less than three months using a semantic network.

trivia: possibly because of the UMLS work, in the summer of 2002 I got a call asking us to respond to an IC-ARDA (since renamed IARPA) unclassified BAA that was about to close ... which basically said that none of the agency tools did the job. We got a response in, had some (unclassified) meetings ... and then dead silence; didn't know what happened until the success-of-failure articles appeared. Meetings were a little strange w/o a clearance ... although when I taught computer/security classes in the early 70s (right after graduating and joining IBM), they would tell me offline that they knew where I was every day of my life back to birth and challenged me to name a date (this was before the Church Committee and I guess they justified it because they ran so much of my software).

other sowa trivia (his tome on rise & fall of IBM's Future System effort):
http://www.jfsowa.com/computer/memo125.htm

System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
success of failure posts
https://www.garlic.com/~lynn/submisc.html#success.of.failure
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

posts mentioning IC-ARDA/IARPA BAA
https://www.garlic.com/~lynn/2022h.html#18 Sun Tzu, Aristotle, and John Boyd
https://www.garlic.com/~lynn/2022c.html#120 Programming By Committee
https://www.garlic.com/~lynn/2022c.html#40 After IBM
https://www.garlic.com/~lynn/2021i.html#53 The Kill Chain
https://www.garlic.com/~lynn/2021g.html#66 The Case Against SQL
https://www.garlic.com/~lynn/2021f.html#68 RDBMS, SQL, QBE
https://www.garlic.com/~lynn/2019e.html#129 Republicans abandon tradition of whistleblower protection at impeachment hearing
https://www.garlic.com/~lynn/2019e.html#40 Acting Intelligence Chief Refuses to Testify, Prompting Standoff With Congress
https://www.garlic.com/~lynn/2019.html#82 The Sublime: Is it the same for IBM and Special Ops?
https://www.garlic.com/~lynn/2019.html#49 Pentagon harbors culture of revenge against whistleblowers
https://www.garlic.com/~lynn/2018e.html#6 The Pentagon Is Building a Dream Team of Tech-Savvy Soldiers
https://www.garlic.com/~lynn/2017i.html#11 The General Who Lost 2 Wars, Leaked Classified Information to His Lover--and Retired With a $220,000 Pension
https://www.garlic.com/~lynn/2017h.html#23 This Is How The US Government Destroys The Lives Of Patriotic Whistleblowers
https://www.garlic.com/~lynn/2017c.html#47 WikiLeaks CIA Dump: Washington's Data Security Is a Mess
https://www.garlic.com/~lynn/2017c.html#5 NSA Deputy Director: Why I Spent the Last 40 Years In National Security
https://www.garlic.com/~lynn/2017b.html#35 Former CIA Analyst Sues Defense Department to Vindicate NSA Whistleblowers
https://www.garlic.com/~lynn/2016h.html#96 This Is How The US Government Destroys The Lives Of Patriotic Whistleblowers
https://www.garlic.com/~lynn/2016f.html#40 Misc. Success of Failure
https://www.garlic.com/~lynn/2016b.html#62 The NSA's back door has given every US secret to our enemies
https://www.garlic.com/~lynn/2016b.html#39 Failure as a Way of Life; The logic of lost wars and military-industrial boondoggles
https://www.garlic.com/~lynn/2014c.html#85 11 Years to Catch Up with Seymour
https://www.garlic.com/~lynn/2014c.html#66 F-35 JOINT STRIKE FIGHTER IS A LEMON

other posts mentioning semantic network dbms
https://www.garlic.com/~lynn/2023c.html#36 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2022d.html#74 WAIS. Z39.50
https://www.garlic.com/~lynn/2021g.html#56 The Case Against SQL
https://www.garlic.com/~lynn/2021g.html#52 The Case Against SQL
https://www.garlic.com/~lynn/2021f.html#67 RDBMS, SQL, QBE
https://www.garlic.com/~lynn/2021c.html#28 System/R, QBE, IMS, EAGLE, IDEA, DB2
https://www.garlic.com/~lynn/2014g.html#40 Fifty Years of BASIC, the Programming Language That Made Computers Personal
https://www.garlic.com/~lynn/2012k.html#51 1132 printer history
https://www.garlic.com/~lynn/2012j.html#78 locks, semaphores and reference counting
https://www.garlic.com/~lynn/2009o.html#26 Some Recollections
https://www.garlic.com/~lynn/2009o.html#11 Microprocessors with Definable MIcrocode
https://www.garlic.com/~lynn/2006w.html#11 long ago and far away, vm370 from early/mid 70s
https://www.garlic.com/~lynn/2006v.html#48 Why so little parallelism?
https://www.garlic.com/~lynn/2006v.html#47 Why so little parallelism?
https://www.garlic.com/~lynn/2004q.html#31 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004f.html#7 The Network Data Model, foundation for Relational Model

--
virtualization experience starting Jan1968, online at home since Mar1970

Ingenious librarians

From: Lynn Wheeler <lynn@garlic.com>
Subject: Ingenious librarians
Date: 08 June, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#7 Ingenious librarians
https://www.garlic.com/~lynn/2023d.html#11 Ingenious librarians

topic drift regarding Sowa's FS implosion tome. At the time, my wife reported to the head of one of the FS "sections" and thought all the blue sky stuff was really neat ... but also noted there was very little work on real implementations. I was at the Cambridge Science Center ... and would ridicule some of the FS people that would come by for a visit (drawing analogies with a long running cult film down in central sq ... which wasn't exactly a career enhancing activity). One of the final nails in the FS coffin was work by the Houston Science Center showing that 370/195 applications ported to an FS machine made out of the fastest available hardware would have the throughput of a 370/145 ... about a 30 times slowdown, not just slower than the FS 370 emulation ... that 370 emulation was then used for the 3081 ... accounting for the 3081 having such an enormous increase in the number of circuits (especially compared to other IBM mainframes). Note the reference to Amdahl having much better price/performance ... a single processor Amdahl had about the same raw processing as the aggregate of the two-processor 3081K but much higher throughput ... aggravated by two-processor MVS multiprocessor support claiming only 1.2-1.5 times the throughput of a single processor (related to the enormous MVS multiprocessor overhead).

This was related to having been roped into working on a 16-processor 370 and con'ing the 3033 processor group into working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips) ... then somebody tells the head of POK that it could be decades before POK's favorite son operating system (MVS) would have effective 16-processor support (POK doesn't ship a 16-processor machine until after the turn of the century, nearly 25yrs later). Some of us are invited to never visit POK again ... and the 3033 processor engineers are told to not get distracted again ... heads down on 3033.

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, multiprocessor, tightly-coupled, and/or compare-and-swap instruction posts
https://www.garlic.com/~lynn/subtopic.html#smp
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

post referring to FS implosion in this IBM downfall post
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Has the Pentagon Learned from the F-35 Debacle?

From: Lynn Wheeler <lynn@garlic.com>
Subject: Has the Pentagon Learned from the F-35 Debacle?
Date: 08 June, 2023
Blog: Facebook
Has the Pentagon Learned from the F-35 Debacle?
https://www.pogo.org/analysis/2023/06/has-the-pentagon-learned-from-the-f-35-debacle

military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
perpetual war posts
https://www.garlic.com/~lynn/submisc.html#perpetual.war

The Pentagon Labyrinth
https://www.pogo.org/podcasts/pentagon-labyrinth
http://chuckspinney.blogspot.com/p/pentagon-labyrinth.html
http://dnipogo.org/labyrinth/

... Boyd quote
http://www.theamericanconservative.com/articles/john-boyds-art-of-war/

"Here too Boyd had a favorite line. He often said, 'It is not true the Pentagon has no strategy. It has a strategy, and once you understand what that strategy is, everything the Pentagon does makes sense. The strategy is, don't interrupt the money flow, add to it.'"

... snip ...

some pasts F35 posts
https://www.garlic.com/~lynn/2022f.html#25 Powerless F-35s
https://www.garlic.com/~lynn/2022c.html#105 The Bunker: Pentagon Hardware Hijinks
https://www.garlic.com/~lynn/2022c.html#78 Future F-35 Upgrades Send Program into Tailspin
https://www.garlic.com/~lynn/2021i.html#88 IBM Downturn
https://www.garlic.com/~lynn/2021g.html#87 The Bunker: Follow All of the Money. F-35 Math 1.0 Another portent of problems
https://www.garlic.com/~lynn/2021e.html#88 The Bunker: More Rot in the Ranks
https://www.garlic.com/~lynn/2021e.html#46 SitRep: Is the F-35 officially a failure? Cost overruns, other issues prompt Air Force to look for "clean sheet" fighter
https://www.garlic.com/~lynn/2021d.html#0 THE PENTAGON'S FLYING FIASCO. Don't look now, but the F-35 is afterburnered toast
https://www.garlic.com/~lynn/2021c.html#82 The F-35 and other Legacies of Failure
https://www.garlic.com/~lynn/2021c.html#8 Air Force thinking of a new F-16ish fighter
https://www.garlic.com/~lynn/2021b.html#100 The U.S. Air Force Just Admitted The F-35 Stealth Fighter Has Failed
https://www.garlic.com/~lynn/2018b.html#117 F-35: Still No Finish Line in Sight
https://www.garlic.com/~lynn/2018b.html#17 Important US technology companies sold to foreigners
https://www.garlic.com/~lynn/2016e.html#61 5th generation stealth, thermal, radar signature
https://www.garlic.com/~lynn/2016b.html#96 Computers anyone?
https://www.garlic.com/~lynn/2016b.html#91 Computers anyone?
https://www.garlic.com/~lynn/2016b.html#90 Computers anyone?
https://www.garlic.com/~lynn/2016b.html#89 Computers anyone?
https://www.garlic.com/~lynn/2016.html#75 American Gripen: The Solution To The F-35 Nightmare
https://www.garlic.com/~lynn/2015f.html#20 Credit card fraud solution coming to America...finally
https://www.garlic.com/~lynn/2015b.html#75 How Russia's S-400 makes the F-35 obsolete
https://www.garlic.com/~lynn/2014j.html#43 Let's Face It--It's the Cyber Era and We're Cyber Dumb
https://www.garlic.com/~lynn/2014g.html#48 The Pentagon Is Playing Games With Its $570-Billion Budget
https://www.garlic.com/~lynn/2014d.html#69 Littoral Warfare Ship
https://www.garlic.com/~lynn/2014c.html#86 11 Years to Catch Up with Seymour
https://www.garlic.com/~lynn/2014c.html#66 F-35 JOINT STRIKE FIGHTER IS A LEMON
https://www.garlic.com/~lynn/2014c.html#51 F-35 JOINT STRIKE FIGHTER IS A LEMON
https://www.garlic.com/~lynn/2014c.html#4 Defense Department Needs to Act Like IBM to Save Itself
https://www.garlic.com/~lynn/2013c.html#54 NBC's website hacked with malware
https://www.garlic.com/~lynn/2013b.html#68 NBC's website hacked with malware
https://www.garlic.com/~lynn/2012i.html#24 Interesting News Article
https://www.garlic.com/~lynn/2012f.html#88 Defense acquisitions are broken and no one cares
https://www.garlic.com/~lynn/2012f.html#68 'Gutting' Our Military
https://www.garlic.com/~lynn/2012e.html#25 We are on the brink of historic decision [referring to defence cuts]
https://www.garlic.com/~lynn/2011p.html#142 We are on the brink of a historic decision [referring to defence cuts]
https://www.garlic.com/~lynn/2011k.html#49 50th anniversary of BASIC, COBOL?
https://www.garlic.com/~lynn/2011k.html#42 Senator urges DoD: Do better job defending F-35

--
virtualization experience starting Jan1968, online at home since Mar1970

Rent/Leased IBM 360

From: Lynn Wheeler <lynn@garlic.com>
Subject: Rent/Leased IBM 360
Date: 08 June, 2023
Blog: Facebook
Tab machines were rented/leased based in part on "duty cycle" ... related to wear&tear on the machines (running faster, longer, etc). 360s were rented/leased based on a "system meter" that ran whenever the CPU and/or any channel was "busy" ... even in internal IBM datacenters. After the virtual machine commercial online spinoffs from the Cambridge Science Center, both the spinoffs and CSC put in a lot of work on having systems available 7x24 ... but allowing the system meter to stop when the system was quiet (especially offshift when nothing might be going on): offshift unattended, dark-room operation; channel programs that allowed the channel to go idle ... but become immediately operational when characters were coming in; CPU in wait state when nothing was active. The CPU(s) and all channels had to be idle for at least 400ms for the 360 system meter to stop. Note: long after IBM had switched from rent/lease to sales ... POK's favorite operating system (MVS) still had a timer task that woke up every 400ms (guaranteeing that the system meter would never stop).
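
A toy simulation in C of that 400ms rule (my own sketch, not anything from the actual metering hardware): the meter only stops once CPU and channels have been idle for a full 400ms, so a kernel timer task that wakes every 400ms keeps it running around the clock.

#include <stdio.h>

#define METER_IDLE_THRESHOLD_MS 400

int main(void)
{
    int timer_wakeup_ms = 400;   /* MVS-style timer task period       */
    int idle_ms = 0;             /* how long everything has been idle */
    long metered_ms = 0;         /* time accumulated on the meter     */

    for (int t = 0; t < 10000; t++) {              /* simulate 10 seconds */
        int busy = (t % timer_wakeup_ms) == 0;     /* timer task pops     */
        idle_ms = busy ? 0 : idle_ms + 1;
        if (idle_ms < METER_IDLE_THRESHOLD_MS)     /* meter still running */
            metered_ms++;
    }
    printf("meter ran %ldms out of 10000ms simulated\n", metered_ms);
    return 0;
}

With a 400ms wake-up the meter never gets its 400ms of quiet; make timer_wakeup_ms anything larger and the metered time drops off.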

I had taken a 2 credit hour intro to Fortran/computers and at the end of the semester was hired to port 1401 MPIO to a 360/30. The student keypunch room had a 407 programmed for 80x80 print ... students would use the 407 to print their card programs. During the class, I taught myself how to program the 407 plug board to do other stuff, but was careful to return it to normal operation when done.

The univ had been sold a 360/67 for TSS/360, replacing a 709 (tape->tape) and a 1401 (front-end for unit record; tapes manually moved between the 1401 and 709 drives). Pending arrival of the 360/67, the 1401 was replaced with a 360/30. The 360/30 had 1401 emulation and MPIO could have continued with no additional effort ... but I guess I was part of getting 360 experience. The univ datacenter was shutdown (& powered off) over the weekend ... but they let me have the datacenter for the whole weekend (although 48hrs w/o sleep made Monday morning classes a little hard) ... I got a bunch of hardware&software manuals and got to design and implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc ... after a few weeks, I had a 2000 card assembler program. I also learned to start the weekend cleaning tape drives and the printer, disassembling the card reader/punch, cleaning, reassembling, etc. I'm guessing that the univ got a great rate on the 360/30 rent/lease (especially for my 48hr weekend time). Sometimes production finished early and I would arrive Sat. morning with everything powered off. The 360/30 power-on sequence would sometimes fail. I learned to place all controllers in CE-mode and the 360/30 would power up. I then individually powered on each controller, returning it to normal mode (which would normally succeed).

Within a year of taking the intro class, the 360/67 arrived and I was hired fulltime responsible for OS/360 (TSS/360 never came to production fruition, so the machine ran as a 360/65 with OS/360). Student Fortran jobs ran under a second on the 709 (tape->tape). Initially with 360/65 OS/360, they ran over a minute. I install HASP, cutting the time in half. Then I start a highly customized STAGE2 SYSGEN to carefully place datasets and PDS members to optimize arm seek and PDS directory multi-track search, cutting another 2/3rds to 12.9sec. Student Fortran never gets better than the 709 until I install Univ. of Waterloo's WATFOR.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
virtual machine-based online service posts
https://www.garlic.com/~lynn/submain.html#online

some past posts mentioning MPIO, student Fortran, Stage2 Sysgen, etc
https://www.garlic.com/~lynn/2023c.html#96 Fortran
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#75 IBM Mainframe
https://www.garlic.com/~lynn/2023b.html#15 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023.html#22 IBM Punch Cards
https://www.garlic.com/~lynn/2022h.html#99 IBM 360
https://www.garlic.com/~lynn/2022h.html#60 Fortran
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2022d.html#10 Computer Server Market
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021k.html#1 PCP, MFT, MVT OS/360, VS1, & VS2
https://www.garlic.com/~lynn/2021f.html#43 IBM Mainframe
https://www.garlic.com/~lynn/2018c.html#86 OS/360
https://www.garlic.com/~lynn/2017h.html#49 System/360--detailed engineering description (AFIPS 1964)
https://www.garlic.com/~lynn/2017f.html#36 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/2014.html#23 Scary Sysprogs and educating those 'kids'
https://www.garlic.com/~lynn/2012d.html#7 PCP - memory lane

--
virtualization experience starting Jan1968, online at home since Mar1970

Boeing 747

From: Lynn Wheeler <lynn@garlic.com>
Subject: Boeing 747
Date: 08 June, 2023
Blog: Facebook
I took a two credit hr intro to Fortran/Computers ... the univ had been sold a 360/67 for TSS/360 to replace a 709/1401. Within a year of taking the intro class, the 360/67 had come in and I was hired fulltime responsible for OS/360 (TSS/360 never came to production fruition so it ran as a 360/65 with OS/360). Then before I graduate, I'm hired into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit to better monetize the investment). I think the Renton datacenter was possibly the largest in the world, a couple hundred million in 360s, 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room. Lots of politics between the Renton manager and the CFO, who only had a 360/30 for payroll up at Boeing Field (although they enlarge the machine room to install a 360/67 for me to play with when I'm not doing other stuff).

747#3 is flying the skies of Seattle getting flt certification. Also there is a cabin mockup just south of Boeing Field where they claim that 747 would always be served by at least four jetways (because of so many people) ... who has ever seen 747 at a gate with four jetways??? I also got to know some of the Boeing engineers who were now having to commute from Renton area through Seattle up to the new 747 plant in Everett.

After I graduate, I join the IBM Cambridge Science Center (instead of staying at Boeing). At IBM, one of my hobbies was enhanced production operating systems for internal datacenters ... the online sales&marketing support HONE systems were a long-time customer. Some of my 1st overseas trips (Atlantic and Pacific) were when HONE asked me to go along for the first non-US HONE installs. Facebook&HONE trivia: mid-70s, the US HONE datacenters were consolidated in Silicon Valley. When Facebook 1st moves into silicon valley, it is into a new bldg built next door to the former US HONE datacenter.

Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

a few past posts mentioning Boeing CFO, 747, HONE
https://www.garlic.com/~lynn/2022g.html#63 IBM DPD
https://www.garlic.com/~lynn/2021f.html#20 1401 MPIO
https://www.garlic.com/~lynn/2021f.html#16 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2021d.html#25 Field Support and PSRs
https://www.garlic.com/~lynn/2018.html#55 Now Hear This--Prepare For The "To Be Or To Do" Moment
https://www.garlic.com/~lynn/2017j.html#104 Now Hear This-Prepare For The "To Be Or To Do" Moment

... the last product we did at IBM was HA/CMP ... it started out as HA/6000 for NYTimes to migrate their newspaper system (ATEX) from VAXCluster to RS/6000; I rename it HA/CMP (High Availability Cluster Multi-Processing) when I start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors ... that was before cluster scale-up was transferred for announce as IBM Supercomputer and we were told we couldn't work on anything with more than four processors. We had a number of marketing tours over both the Atlantic and Pacific. One of the best 1st class 747 flights was San Fran to Hong Kong on Singapore Airlines ... rated #1 in the world (talking to some United people about it, they said it was because Singapore cabin attendants weren't unionized).

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

... most flying I did was after getting blamed for online computer conferencing (folklore was when the corporate executive committee was told, 5of6 wanted to fire me), I was left to live in San Jose, but transferred to Yorktown and had to commute to YKT a couple times a month (keeping an office in SJR and then Almaden after research moved up the hill ... and also a wing of offices and labs in Los Gatos). I would work in San Jose on Monday and then get the TWA44 redeye Monday night SFO to JFK, arriving in YKT very early, and then get TWA8?? (Tel Aviv->Rome->JFK) back late Friday to SFO. When TWA collapsed, I switched to Pan Am ... and when Pan Am sold its "pacific" 747s to United (to concentrate on the Atlantic), switched to American (there were times I would fly a United 747 and recognize an old PanAm plane serial); lost a huge number of TWA and then PanAm "miles". some computer conferencing mentioned here
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

--
virtualization experience starting Jan1968, online at home since Mar1970

Grace Hopper (& Ann Hardy)

From: Lynn Wheeler <lynn@garlic.com>
Subject: Grace Hopper (& Ann Hardy)
Date: 09 June, 2023
Blog: Facebook
Grace Hopper
https://en.wikipedia.org/wiki/Grace_Hopper

at least she didn't work for IBM ... Ann Hardy
https://medium.com/chmcore/someone-elses-computer-the-prehistory-of-cloud-computing-bca25645f89

Ann Hardy is a crucial figure in the story of Tymshare and time-sharing. She began programming in the 1950s, developing software for the IBM Stretch supercomputer. Frustrated at the lack of opportunity and pay inequality for women at IBM -- at one point she discovered she was paid less than half of what the lowest-paid man reporting to her was paid -- Hardy left to study at the University of California, Berkeley, and then joined the Lawrence Livermore National Laboratory in 1962. At the lab, one of her projects involved an early and surprisingly successful time-sharing operating system.

... snip ...

If Discrimination, Then Branch: Ann Hardy's Contributions to Computing
https://computerhistory.org/blog/if-discrimination-then-branch-ann-hardy-s-contributions-to-computing/

Much more Ann Hardy at Computer History Museum
https://www.computerhistory.org/collections/catalog/102717167

Ann rose up to become Vice President of the Integrated Systems Division at Tymshare, from 1976 to 1984, which did online airline reservations, home banking, and other applications. When Tymshare was acquired by McDonnell-Douglas in 1984, Ann's position as a female VP became untenable, and was eased out of the company by being encouraged to spin out Gnosis, a secure, capabilities-based operating system developed at Tymshare. Ann founded Key Logic, with funding from Gene Amdahl, which produced KeyKOS, based on Gnosis, for IBM and Amdahl mainframes. After closing Key Logic, Ann became a consultant, leading to her cofounding Agorics with members of Ted Nelson's Xanadu project.

... snip ...

Gnosis/KeyKOS trivia: I was brought in to review Gnosis as part of the spinoff to Key Logic.

TYMSHARE trivia:
https://en.wikipedia.org/wiki/Tymshare
In Aug1976, TYMSHARE starts providing its CMS-based online computer conferencing system (precursor to modern social media), "free", to IBM mainframe user group SHARE
https://www.share.org/
as VMSHARE, archives here:
http://vm.marist.edu/~vmshare

After transferring to San Jose Research ... I got to wander around IBM and customer datacenters in silicon valley ... and frequently drop in on Tymshare, Dialog, and others ... and/or see them at the monthly BAYBUNCH user group meetings hosted by Stanford SLAC. I cut a deal with Tymshare to get a monthly tape dump of all VMSHARE files for putting up on internal IBM systems (including the world-wide, online sales&marketing support HONE systems; the US HONE datacenters had all been consolidated up in Palo Alto in the mid-70s ... Facebook trivia: when facebook 1st moves into silicon valley, it is into a new bldg built next door to the former HONE datacenter).

The biggest problem I had with VMSHARE was the IBM lawyers, concerned that exposure to customer information would contaminate internal employees (and/or conflict with what they were being told about customers). An earlier example was in 1974, when CERN made a report freely available to SHARE comparing VM370/CMS and MVS/TSO; copies inside IBM were stamped IBM Confidential - Restricted (2nd highest security classification) ... available on a need-to-know basis only.

virtual machine-based online service posts
https://www.garlic.com/~lynn/submain.html#online

some past posts mentioning Ann Hardy
https://www.garlic.com/~lynn/2023c.html#97 Fortran
https://www.garlic.com/~lynn/2023b.html#35 When Computer Coding Was a 'Woman's' Job
https://www.garlic.com/~lynn/2022h.html#60 Fortran
https://www.garlic.com/~lynn/2022g.html#92 TYMSHARE
https://www.garlic.com/~lynn/2021k.html#92 Cobol and Jean Sammet
https://www.garlic.com/~lynn/2021k.html#0 Women in Computing
https://www.garlic.com/~lynn/2021j.html#71 book review: Broad Band: The Untold Story of the Women Who Made the Internet
https://www.garlic.com/~lynn/2021h.html#98 CMSBACK, ADSM, TSM
https://www.garlic.com/~lynn/2019d.html#27 Someone Else's Computer: The Prehistory of Cloud Computing

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM MVS RAS

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM MVS RAS
Date: 09 June, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#10 IBM MVS RAS

As an undergraduate in the 60s, I was redoing lots of OS/360 and CP/67, including dynamic adaptive resource management ... lots of my CP/67 stuff the Science Center would pick up and ship in the release

note in 23jun1969 unbundling announce, IBM began charging for (application) software (but made the case that kernel software would still be "free"), SE services, maintenance, etc.

A decade ago, I was asked to find the decision to add virtual memory to all 370s. I found a (former) employee on the executive's staff ... basically OS/360 MVT storage management was so bad that regions had to be specified four times larger than used ... as a result, a typical 1mbyte 370/165 only ran four concurrent regions, insufficient to keep the processor busy and justified. Going to 16mbyte virtual memory, the number of regions could be increased by a factor of four with little or no paging. Pieces of the email exchange in this old archived post
https://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory

During the Future System effort in the 70s (totally different and going to completely replace 370), internal politics were shutting down 370 efforts (the lack of new 370s during the period is credited with giving 370 clone makers their market foothold). I continued to work on 370 all during FS, including ridiculing their activity (drawing analogy with a long running cult film down in central sq). When FS finally implodes, there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel. Some more FS ref.
http://www.jfsowa.com/computer/memo125.htm

In the morph from CP67->VM370, a lot of CP67 function was dropped and/or simplified (including much of the stuff I had done as an undergraduate in the 60s). Note when I first joined IBM, one of my hobbies was enhanced production operating systems for internal datacenters ... and as datacenters were migrating to VM370, I spent some amount of time bringing VM370 up to CP67 level ... for CSC/VM ... including dynamic adaptive resource management. The rise of clone 370s (and the FS implosion) resulted in reversing the decision not to charge for kernel software ... incrementally, first just charging for new stuff (until the 80s when all kernel software was charged for). Initially, they selected a lot of my (internal datacenter) stuff as the kernel charging guinea pig (and I had to spend a lot of time with lawyers and business people regarding kernel software charging policies/practices).

There was some resource management expert from corporate who said that he wouldn't sign off on my "resource manager" without manual tuning parameters; the current state of the art (i.e. MVS) had huge boat loads of manual tuning parameters. I tried to explain to him what "dynamic adaptive" meant, but it fell on deaf ears. So I created my "SRM Tuning Parameters" (as an MVS spoof), providing all the documentation and formulas on how the manual tuning parameters work. Some years later, I would ask customers if they got the joke ... aka from "Operations Research" and "degrees of freedom" ... i.e. the dynamic adaptive code had larger "degrees of freedom" and could override any manual tuning parameter setting (aka all the information was in the manuals ... but very few realized/recognized that the formulas weren't "static" ... but were constantly being dynamically updated). some related post
https://www.linkedin.com/pulse/zvm-50th-part-4-lynn-wheeler/
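
A toy sketch in C of the "degrees of freedom" joke (my own illustration, not the actual resource manager code): the administrator's manual knob is still in the formula, but the coefficient applied to it is recomputed each interval from measured load, so the dynamic terms can swamp whatever static value was set.

#include <stdio.h>

/* administrator's "manual tuning parameter" */
static const double manual_bias = 2.0;

/* coefficient recomputed every interval from observed utilization:
   the heavier the measured load, the less the static knob matters */
static double dynamic_weight(double observed_util)
{
    return 1.0 / (1.0 + 10.0 * observed_util);
}

/* dispatch priority = consumption term + dynamically weighted manual term */
static double dispatch_priority(double consumed, double observed_util)
{
    return consumed + dynamic_weight(observed_util) * manual_bias;
}

int main(void)
{
    for (double util = 0.0; util <= 1.0; util += 0.25)
        printf("util %.2f: priority %.3f (manual term contributes %.3f)\n",
               util, dispatch_priority(5.0, util),
               dynamic_weight(util) * manual_bias);
    return 0;
}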

Then in the 80s, with the change to charging for all kernel software, came the OCO-wars ("object code only", no longer shipping source code).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
dynamic adaptive resource management
https://www.garlic.com/~lynn/subtopic.html#fairshare
23jun1969 unbundling announcement
https://www.garlic.com/~lynn/submain.html#unbundle

some past posts mentioning dynamic adaptive resource manager & SRM
https://www.garlic.com/~lynn/2022f.html#53 z/VM 50th - part 4
https://www.garlic.com/~lynn/2021g.html#0 MVS Group Wrath
https://www.garlic.com/~lynn/2010m.html#81 Nostalgia
https://www.garlic.com/~lynn/2005p.html#31 z/VM performance
https://www.garlic.com/~lynn/2002k.html#66 OT (sort-of) - Does it take math skills to do data processing ?
https://www.garlic.com/~lynn/2001e.html#51 OT: Ever hear of RFC 1149? A geek silliness taken wing

23jun1969 unbundling, software charging, oco-wars
https://www.garlic.com/~lynn/2015h.html#38 high level language idea
https://www.garlic.com/~lynn/2013o.html#45 the nonsuckage of source, was MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013b.html#26 New HD
https://www.garlic.com/~lynn/2011i.html#17 Got to remembering... the really old geeks (like me) cut their teeth on Unit Record
https://www.garlic.com/~lynn/2011i.html#7 Do you remember back to June 23, 1969 when IBM unbundled

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3880 Disk Controller

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3880 Disk Controller
Date: 10 June, 2023
Blog: Facebook
When I transferred to San Jose Research, I got to wander around IBM and customer datacenters in silicon valley, including disk engineering (bldg14) and product test (bldg15) across the street. They are doing 7x24, pre-scheduled, stand-alone testing. They say they had recently tried MVS, but it had a 15min MTBF in that environment. I offer to rewrite the I/O supervisor, making it bullet proof and never fail, so they can do any amount of on-demand, concurrent testing, greatly improving productivity. Then bldg15 gets an engineering 3033 (#3 or #4), the first outside of POK. Testing only takes a percent or two of CPU, so we scavenge a 3830 disk controller and a couple strings (8 drives/string) of 3330s for a (3033 dedicated) private online service for a few of us.

One Monday morning I get an irate call demanding to know what I had done to the online service; I say nothing, what had they done. It takes an hour, but it turns out that over the weekend they had swapped the 3830 disk controller for an engineering 3880 disk controller ... which had significantly worse throughput. It turns out that the 3880 had a special hardware path for up to 3mbyte/sec transfer ... but an enormously slower control processor for everything else (mainframe channel protocol had a significant amount of "chatter" back&forth for every operation). To try and mask how slow it was, they tried to present an early end-of-operation interrupt (before all the end-of-operation processing had completed) ... making it appear as if it was taking less elapsed time than it actually did (hoping to mask the additional overhead, figuring they could finish before the software interrupt handling and device driver got around to redriving the next I/O). Some of the stand-alone tests with MVS showed no problem, but when I rewrote the I/O supervisor to be bullet proof and never fail, I had also radically cut the pathlength from interrupt to device redrive. I claimed my 370 software redrive pathlength was very close to XA/370 hardware-assisted I/O (in large part motivated by the enormous MVS pathlength between end-of-I/O interrupt and device redrive)
https://www.linkedin.com/pulse/mainframe-channel-redrive-lynn-wheeler/
https://www.linkedin.com/pulse/mainframe-channel-io-lynn-wheeler/

The diagnostic was: 3880 end-of-I/O interrupt, redrive tries to start the next I/O, the 3880 is still busy and forced to respond SIO CC=1, SM+BUSY; queue the attempted request, wait for the CUE I/O interrupt (when the 3880 is really finished), then try restarting the queued I/O request. It degraded overall throughput and especially affected the numerous trivial interactive responses that required any disk I/O. So it was back to the drawing board for 3880 tweaking.
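
A small simulation in C of that diagnostic (my own sketch, purely illustrative): the controller presents its ending interrupt "early" while the control processor is still cleaning up; a slow redrive path never notices, a fast redrive path hits the still-busy controller and has to queue and wait for the CUE.

#include <stdio.h>

#define CTLR_CLEANUP_MS 3   /* controller still busy this long after the early interrupt */

/* stand-in for SIO: returns 1 if the controller responds CC=1 SM+BUSY */
static int sio_busy(int ms_after_interrupt)
{
    return ms_after_interrupt < CTLR_CLEANUP_MS;
}

static void redrive(const char *path, int interrupt_to_sio_ms)
{
    if (sio_busy(interrupt_to_sio_ms))
        printf("%s (%dms): CC=1 SM+BUSY -> queue request, wait for CUE, restart\n",
               path, interrupt_to_sio_ms);
    else
        printf("%s (%dms): next I/O started immediately\n",
               path, interrupt_to_sio_ms);
}

int main(void)
{
    redrive("long interrupt-to-redrive pathlength", 10);
    redrive("short interrupt-to-redrive pathlength", 1);
    return 0;
}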

I had also redone multi-path channel support: instead of primary & alternates, I implemented load balancing ... which improved 3830-based operations ... but the 3880 also saved information about the channel connection from the previous operation. If it was hit with a request from a different channel, it would flush the saved information and rebuild it for the different channel connection ... aka if I was dealing with a 3880, instead of doing channel load balancing, I had to explicitly do primary/alternate. Of course, none of that helped in a multiple-CEC, loosely-coupled environment, where requests were coming in from channels on completely different systems.
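
For the flavor of the two path-selection policies (again just my own sketch in C, not the VM370 code): load balancing rotates across all channel paths to a device, while primary/alternate always tries the primary first and only falls back when it is busy -- which is what the 3880's per-channel saved state pushed me back to.

#include <stdio.h>

#define NPATHS 4

/* load balancing: rotate through all available channel paths */
static int pick_path_balanced(int *rr)
{
    int p = *rr;
    *rr = (*rr + 1) % NPATHS;
    return p;
}

/* primary/alternate: stick with path 0 unless it is busy */
static int pick_path_primary(const int busy[NPATHS])
{
    for (int p = 0; p < NPATHS; p++)
        if (!busy[p])
            return p;
    return -1;                       /* all paths busy: queue the request */
}

int main(void)
{
    int rr = 0;
    int busy[NPATHS] = { 1, 0, 0, 0 };   /* primary currently busy */

    for (int i = 0; i < 4; i++)
        printf("balanced pick: path %d\n", pick_path_balanced(&rr));
    printf("primary/alternate pick: path %d\n", pick_path_primary(busy));
    return 0;
}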

I had that problem with the IBM US HONE system (online sales&marketing support up in Palo Alto); it had what I believed was the largest single-system-image, loosely-coupled complex in the world, with load-balancing and fall-over across the complex (initially eight single-processor systems, upgraded to two processors/system when I added CP67 multiprocessor support into VM370 Release 3). It had a large 3330 DASD farm with 3330 string-switch (each string of eight 3330s connected to two 3830 controllers) and four-channel 3830s (resulting in each 3330 string having eight-system connectivity). Some more HONE
https://www.linkedin.com/pulse/zvm-50th-part-4-lynn-wheeler/

A couple years later, 3880/3380 was getting ready to ship and FE had a suite of 57 emulated errors they thought could be expected. For all 57 errors, MVS would still fail (requiring manual re-IPL), in 2/3rds of the cases leaving no indication of the cause of the failure (there was a joke about MVS recovery being applied to covering up the original error) ... old email reference
https://www.garlic.com/~lynn/2007.html#email801015

Then for trout/3090, they had designed the number of channels to give the targeted throughput, assuming the 3880/3380 had the same channel busy as the 3830 but with 3mbyte/sec transfer. However, even with all the protocol and processing tweaking, the 3880 still had significantly higher channel busy ... and 3090 realized that they had to significantly increase the number of channels to achieve the targeted system throughput (to compensate for the higher 3880 channel busy). Marketing eventually respun the increased number of channels as a wonderful I/O machine ... when it actually was to compensate for the slow 3880 processing. The increase in the number of channels required an extra TCM, and the 3090 group semi-facetiously said they would bill the 3880 group for the increase in 3090 manufacturing cost.

posts mentioning to get to play disk engineer in bldg 14&15
https://www.garlic.com/~lynn/subtopic.html#disk
posts mentioning HONE
https://www.garlic.com/~lynn/subtopic.html#hone

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3880 Disk Controller

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3880 Disk Controller
Date: 10 June, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#19 IBM 3880 Disk Controller

From the annals of let no good deed go unpunished (besides the wrath of the MVS group, 1st trying to get me separated from the IBM company and then making sure I would never get any awards for the work) ... an earlier version of the 3880 "early" interrupt could discover an error condition during cleanup (after the ending interrupt) and would present an "unsolicited" unit-check interrupt. I had to argue with them that it violated channel architecture. Eventually they scheduled a conference call with the POK channel engineers, and POK tells them it violates channel architecture. After that they demand that I participate in all POK calls (explaining that all the senior San Jose engineers that really understood channel architecture had departed for San Jose area startups during the late 60s and early 70s). It was changed to hold a pending unit check until the next attempt to start I/O to that device ... and present CC=1, CSW stored with unit check (not ideal, but at least it conforms to channel architecture).

As part of normal device testing, I had to deal with a huge number of problems, including hot-I/O and missing interrupts. When normal handling didn't resolve things: start with HDV/CLRIO instructions for the device address, then try a closed loop of HDV/CLRIO for every controller subchannel address (for newer controllers this would force the controller to reset & re-IMPL), and then next was CLRCH for the channel address.
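
A sketch of that escalation ladder in C (my own simulation: hdv_clrio() and clrch() are hypothetical stand-ins that just print and pretend, not real instruction interfaces); each step is tried only when the previous one failed to clear the condition.

#include <stdio.h>

static int cleared = 0;

/* stand-in for issuing HDV/CLRIO to an address; returns 1 once cleared */
static int hdv_clrio(int addr)
{
    printf("HDV/CLRIO %03X\n", addr);
    if ((addr & 0xF) == 0xF)     /* pretend the controller sweep forces re-IMPL */
        cleared = 1;
    return cleared;
}

/* stand-in for issuing CLRCH to a channel */
static int clrch(int chan)
{
    printf("CLRCH channel %X\n", chan);
    return 1;
}

/* hot-I/O / missing-interrupt recovery: device, then controller, then channel */
int recover(int device_addr)
{
    if (hdv_clrio(device_addr))                      /* 1: device address itself    */
        return 0;
    int cu_base = device_addr & ~0xF;                /* 2: every subchannel address */
    for (int a = cu_base; a <= cu_base + 0xF; a++)   /*    on the same controller   */
        if (hdv_clrio(a))                            /*    (newer controllers reset */
            return 0;                                /*     & re-IMPL)              */
    return clrch((device_addr >> 8) & 0xF) ? 0 : -1; /* 3: clear the whole channel  */
}

int main(void)
{
    return recover(0x1A3);
}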

Now the 3033 was a quick&dirty remap of 168 logic to 20% faster chips. They also used a 158 engine w/o the 370 microcode, with just the integrated channel microcode, for the channel director (i.e. the 3031 was two 158 engines, one that just had the 370 microcode and a 2nd that just had the integrated channel microcode; the 3032 was a 168 reworked to use the 158-engine-with-channel-microcode for external channels). In any case, quickly hitting a channel director with CLRCH for all six of its channel addresses would prompt it to (also) do a reset/re-IMPL (so after the device and controller steps, the next escalation was possibly CLRCH for all the director's channel addresses).

I had also written a test for channel/controller speed. VM370 had a 3330/3350/2305 page format with a "dummy short block" between the 4k page blocks. The standard channel program had a seek followed by search record, read/write 4k ... possibly chained to another search, read/write 4k (trying to maximize the number of 4k transfers in a single rotation). For records on the same cylinder but a different track, in the same rotation, a head-switch seek had to be added. The channel and/or controller time to process the embedded seek could exceed the rotation time for the dummy block ... causing an additional revolution. The test would format a cylinder with the maximum possible dummy block size (between page records) and then start reducing it toward the minimum 50-byte size ... checking to see if an additional rotation was required. The 158 (and also the 303x channel director and 3081) had the slowest channel program processing. I also got several customers with non-IBM processors, channels, and disk controllers to run the tests ... so I had combinations of IBM and non-IBM 370s with IBM and non-IBM disk controllers.
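
A simulation of the idea behind that test, in C (my own sketch; the rotation and track-capacity numbers are rough 3330-style assumptions, not measured values): given a channel/controller's time to process the embedded head-switch seek, find the smallest dummy-block gap, down toward the 50-byte minimum, that still avoids costing an extra revolution.

#include <stdio.h>

#define ROTATION_MS     16.7     /* ~3600 rpm full revolution (assumed)      */
#define TRACK_BYTES     13030.0  /* rough 3330 full-track capacity (assumed) */
#define MIN_DUMMY_BYTES 50

/* time for a dummy block of this size to pass under the heads */
static double gap_time_ms(int dummy_bytes)
{
    return ROTATION_MS * (double)dummy_bytes / TRACK_BYTES;
}

/* smallest dummy block whose pass-by time covers the seek processing time */
static int min_gap_without_extra_rev(double head_switch_ms)
{
    for (int bytes = MIN_DUMMY_BYTES; bytes < 2000; bytes += 10)
        if (gap_time_ms(bytes) >= head_switch_ms)
            return bytes;
    return -1;    /* even a huge gap isn't enough: extra revolution every time */
}

int main(void)
{
    double head_switch_ms[] = { 0.1, 0.4, 1.2 };   /* fast vs slow channel/controller */
    for (int i = 0; i < 3; i++)
        printf("head-switch processing %.1fms -> needs a >= %d byte dummy block\n",
               head_switch_ms[i], min_gap_without_extra_rev(head_switch_ms[i]));
    return 0;
}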

The 145, 148, and 4341 (integrated channels) could all easily do the head switch within the rotation timing. bldg15 also got an early engineering 4341, and it turned out the 4341 integrated channel microcode was so fast that, with a few tweaks, it was used to test 3880/3380 3mbyte/sec transfers. trivia: Jan1979, I was con'ed into doing 4341 benchmarks for a national lab that was looking at getting 70 of them for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami)

posts mentioning to get to play disk engineer in bldg 14&15
https://www.garlic.com/~lynn/subtopic.html#disk

some posts mentioning 303x channel director
https://www.garlic.com/~lynn/2021c.html#71 What could cause a comeback for big-endianism very slowly?
https://www.garlic.com/~lynn/2012o.html#22 Assembler vs. COBOL--processing time, space needed
https://www.garlic.com/~lynn/2006s.html#42 Ranking of non-IBM mainframe builders?

some posts mentioning testing for head/track switch timing
https://www.garlic.com/~lynn/2017k.html#36 IBM etc I/O channels?
https://www.garlic.com/~lynn/2015f.html#88 Formal definition of Speed Matching Buffer
https://www.garlic.com/~lynn/2014k.html#26 1950: Northrop's Digital Differential Analyzer
https://www.garlic.com/~lynn/2013e.html#61 32760?
https://www.garlic.com/~lynn/2013c.html#74 relative mainframe speeds, was What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2010m.html#15 History of Hard-coded Offsets
https://www.garlic.com/~lynn/2008r.html#55 TOPS-10
https://www.garlic.com/~lynn/2007k.html#17 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2006t.html#19 old vm370 mitre benchmark
https://www.garlic.com/~lynn/2006r.html#40 REAL memory column in SDSF
https://www.garlic.com/~lynn/2005s.html#22 MVCIN instruction
https://www.garlic.com/~lynn/2004n.html#14 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2004h.html#43 Hard disk architecture: are outer cylinders still faster than inner cylinders?
https://www.garlic.com/~lynn/2000d.html#11 4341 was "Is a VAX a mainframe?"

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360/195

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360/195
Date: 10 June, 2023
Blog: Facebook
IBM System/360 Model 195
https://en.wikipedia.org/wiki/IBM_System/360_Model_195

The IBM System/360 Model 195 is a discontinued IBM computer introduced on August 20, 1969. The Model 195 was a reimplementation of the IBM System/360 Model 91 design using monolithic integrated circuits.[1] It offers "an internal processing speed about twice as fast as the Model 85, the next most powerful System/360"

... snip ...

I had taken a two credit hr intro to fortran/computers and then, within a year of the class, the univ hired me fulltime responsible for os/360. Then before I graduate, I was hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit to better monetize the investment). I thought the Renton datacenter was possibly the largest in the world, a couple hundred million in IBM gear, 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room. They did have a 360/75 that had a black perimeter rope around the area, and when it ran classified programs, guards would be at the corners and black felt draped over the console lights and the 1403s in areas where printing was visible. They were also looking at replicating Renton up at the new 747 plant in Everett (disaster scenario where Mt. Rainier heats up and the resulting mud slide takes out the Renton datacenter).

After I graduate, I join the IBM Cambridge Science Center (instead of staying at Boeing) ... and the 370/195 (slightly modified 360/195) group tries to rope me into helping with multithreading. The 370/195 had a 64-stage pipeline, but many codes (with conditional branches) would drain the pipeline, cutting throughput in half. They wanted to simulate a (multithreaded) two-processor design (two threads, each potentially running at 1/2 speed) ... somewhat leveraging red/blue from ACS/360.

The project effectively died when it was decided to add virtual memory to all 370 machines (and it was deemed too difficult to add virtual memory to the 195). A decade ago, I was asked to track down the decision (MVT storage management was so bad that regions had to be specified four times larger than used, resulting in a typical 1mbyte 370/165 only able to run four regions concurrently, insufficient to keep the system busy and justified). Some archived email exchange
https://www.garlic.com/~lynn/2011d.html#73

Trivia: 370/165 engineers were claiming that if they had to implement the full 370 virtual memory architecture, the announce date would have to slip six months. In order to reclaim that six months, various architecture features were dropped (that other models had already implemented and some software had already been programmed to use).

End of ACS/360: it was shut down because they were afraid it would advance the state-of-the-art too fast and IBM would lose control of the market ... the following also mentions red/blue and features that show up nearly 25yrs later with ES/9000
https://people.cs.clemson.edu/~mark/acs_end.html

Sidebar: Multithreading

In summer 1968, Ed Sussenguth investigated making the ACS/360 into a multithreaded design by adding a second instruction counter and a second set of registers to the simulator. Instructions were tagged with an additional "red/blue" bit to designate the instruction stream and register set; and, as was expected, the utilization of the functional units increased since more independent instructions were available.

IBM patents and disclosures on multithreading include:

US Patent 3,728,692, J.W. Fennel, Jr., "Instruction selection in a two-program counter instruction unit," filed August 1971, and issued April 1973.

US Patent 3,771,138, J.O. Celtruda, et al., "Apparatus and method for serializing instructions from two independent instruction streams," filed August 1971, and issued November 1973. [Note that John Earle is one of the inventors listed on the '138.]

"Multiple instruction stream uniprocessor," IBM Technical Disclosure Bulletin, January 1976, 2pp. [for S/370]


... snip ...

old email about Hawk and Tomasulo leaving IBM
https://www.garlic.com/~lynn/2019c.html#email810423

SMP, multiprocessing, tightly-coupled, and/or compare-and-swap posts
https://www.garlic.com/~lynn/subtopic.html#smp

posts mentioning 195, multithreading, acs/360, and virtual memory
https://www.garlic.com/~lynn/2023b.html#6 z/VM 50th - part 7
https://www.garlic.com/~lynn/2023b.html#0 IBM 370
https://www.garlic.com/~lynn/2022h.html#112 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022h.html#32 do some Americans write their 1's in this way ?
https://www.garlic.com/~lynn/2022h.html#17 Arguments for a Sane Instruction Set Architecture--5 years later
https://www.garlic.com/~lynn/2022g.html#95 Iconic consoles of the IBM System/360 mainframes, 55 years old
https://www.garlic.com/~lynn/2022d.html#34 Retrotechtacular: The IBM System/360 Remembered
https://www.garlic.com/~lynn/2022d.html#22 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022d.html#12 Computer Server Market
https://www.garlic.com/~lynn/2022b.html#51 IBM History
https://www.garlic.com/~lynn/2022.html#60 370/195
https://www.garlic.com/~lynn/2022.html#31 370/195
https://www.garlic.com/~lynn/2021k.html#46 Transaction Memory
https://www.garlic.com/~lynn/2021h.html#51 OoO S/360 descendants
https://www.garlic.com/~lynn/2021d.html#28 IBM 370/195
https://www.garlic.com/~lynn/2019d.html#62 IBM 370/195
https://www.garlic.com/~lynn/2018b.html#80 BYTE Magazine Pentomino Article
https://www.garlic.com/~lynn/2017g.html#39 360/95
https://www.garlic.com/~lynn/2017c.html#26 Multitasking, together with OS operations
https://www.garlic.com/~lynn/2017.html#90 The ICL 2900
https://www.garlic.com/~lynn/2017.html#3 Is multiprocessing better then multithreading?
https://www.garlic.com/~lynn/2016h.html#98 A Christmassy PL/I tale
https://www.garlic.com/~lynn/2016h.html#45 Resurrected! Paul Allen's tech team brings 50-year-old supercomputer back from the dead
https://www.garlic.com/~lynn/2016h.html#7 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2015h.html#110 Is there a source for detailed, instruction-level performance info?
https://www.garlic.com/~lynn/2015c.html#69 A New Performance Model ?
https://www.garlic.com/~lynn/2015c.html#26 OT: Digital? Cloud? Modern And Cost-Effective? Surprise! It's The Mainframe - Forbes
https://www.garlic.com/~lynn/2014m.html#105 IBM 360/85 vs. 370/165
https://www.garlic.com/~lynn/2005.html#5 [Lit.] Buffer overruns

--
virtualization experience starting Jan1968, online at home since Mar1970

'Tax Scam': Republicans Follow Debt Ceiling Fight by Proposing Tax Cuts for Wealthy

From: Lynn Wheeler <lynn@garlic.com>
Subject: 'Tax Scam': Republicans Follow Debt Ceiling Fight by Proposing Tax Cuts for Wealthy
Date: 10 June, 2023
Blog: Facebook
'Tax Scam': Republicans Follow Debt Ceiling Fight by Proposing Tax Cuts for Wealthy. "If House Republicans were actually serious about the deficit, they would demand wealthy corporations pay their fair share in taxes," one advocate said.
https://www.commondreams.org/news/republicans-follow-debt-ceiling-fight-with-tax-cuts-for-wealthy

recent post with more history ...

H.R.3746 - Fiscal Responsibility Act of 2023
https://www.garlic.com/~lynn/2023d.html#2 H.R.3746 - Fiscal Responsibility Act of 2023

Note the claims about the large tax cuts in the previous administration, that they were going to help everybody ... and that the big corporations would use the money to help employees. The poster child for the large corporate tax cuts claimed the tax cut bonanza would go for employee bonuses. However, going to the company's website, it said that employees would each receive up to a $1000 bonus. Then if you find the number of employees and multiply it by $1000 (assuming that every employee actually got the full $1000), it would amount to 2% of their tax cut bonanza ... the rest went to stock buybacks and executive bonuses (the actual total employee bonus payouts were more like 1% of their tax cut bonanza ... as with the other corporate beneficiaries of the administration largess). Like the Federal Comptroller General was saying in the early part of the century ... not just congress, but a large percentage of the population don't know how to do middle school arithmetic.

some past posts referencing "bonus" claims (& stock buybacks)
https://www.garlic.com/~lynn/2023c.html#31 Federal Deficit and Debt
https://www.garlic.com/~lynn/2019e.html#99 Is America ready to tackle economic inequality?
https://www.garlic.com/~lynn/2018e.html#12 Companies buying back their own shares is the only thing keeping the stock market afloat right now

posts mentioning (90s) fiscal responsibility act posts
https://www.garlic.com/~lynn/submisc.html#fiscal.responsibility.act
comptroller general posts
https://www.garlic.com/~lynn/submisc.html#comptroller.general
tax fraud, tax evasion, tax loopholes, tax avoidance, tax haven posts
https://www.garlic.com/~lynn/submisc.html#tax.evasion
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360/195

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360/195
Date: 12 June, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#20 IBM 360/195

... MVT->SVS->MVS addenda, recent MVS/CSA posts
https://www.garlic.com/~lynn/2023d.html#0 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023d.html#0 IBM MVS RAS

But then MVS was running into the CSA constraint ... it started out as a 1mbyte common segment area for passing parameters and returning results between applications and subsystems in different address spaces (in large part because of the extensive OS/360 pointer-passing API convention). However, requirements were somewhat proportional to the number of concurrent applications and subsystems ... and it ballooned into the CSA (common system area); by 3033 it was frequently running 5-6mbytes (plus the 8mbyte MVS kernel system image mapped into every address space), leaving only 2-3mbytes ... and was threatening to become 8mbytes ... leaving no space for applications (or subsystems) in the 16mbyte virtual address spaces.
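
The squeeze in the 16mbyte virtual address space is simple arithmetic (sizes as described above):

awk 'BEGIN {
  for (csa = 5; csa <= 8; csa++)
    printf "16mbyte - 8mbyte kernel - %dmbyte CSA = %dmbyte left for applications\n", csa, 16 - 8 - csa
}'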

Somebody managed to retrofit a little of XA/370 access registers to 3033 as dual-address space mode to try and alleviate some of the pressure on CSA requirements (i.e. subsystems could access application address space areas directly). Trivia: person responsible for dual-address retrofit left IBM for HP (I got IBM email worried I might go with him) and later was one of the primary Itanium architects.

Other trivia: Burlington was constantly fighting the MVS&CSA requirements for their VLSI FORTRAN applications ... bumping against the 7mbyte limit any time they had to change, enhance, or update the application (dedicated/custom large machines running special MVS systems that were restricted to a single mbyte of CSA).

We offered to do a dedicated VM370+CMS for their FORTRAN VLSI apps ... where CMS was only taking 128kbytes out of 16mbytes (instead of minimum MVS+CSA 9mbytes out of 16mbytes) ... but that met with enormous POK opposition.

some past posts mentioning MVS, CSA, & Burlington Fortran VLSI apps
https://www.garlic.com/~lynn/2022f.html#122 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022d.html#55 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022c.html#49 IBM 3033 Personal Computing
https://www.garlic.com/~lynn/2022b.html#19 Channel I/O
https://www.garlic.com/~lynn/2021i.html#67 Virtual Machine Debugging
https://www.garlic.com/~lynn/2021b.html#63 Early Computer Use
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2018e.html#106 The (broken) economics of OSS
https://www.garlic.com/~lynn/2010d.html#81 LPARs: More or Less?
https://www.garlic.com/~lynn/2010c.html#41 Happy DEC-10 Day
https://www.garlic.com/~lynn/2007m.html#58 Is Parallel Programming Just Too Hard?

--
virtualization experience starting Jan1968, online at home since Mar1970

VM370, SMP, HONE

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: VM370, SMP, HONE
Date: 12 June, 2023
Blog: Facebook
related:
https://www.linkedin.com/pulse/zvm-50th-lynn-wheeler/
https://www.garlic.com/~lynn/2022f.html#44 z/VM 50th

One of my hobbies after joining IBM was enhanced production operating systems for internal datacenters ... and the online sales&marketing support (soon to become world-wide) HONE systems were long time customers. In mid-70s, the US HONE systems were consolidated in Palo Alto (when FACEBOOK 1st moved into silicon valley, it was into new bldg built next door to the former HONE datacenter).

note in the initial morph of CP67->VM370, lots of stuff was simplified and/or dropped (including SMP support). I spent some time in 1974 putting lots of stuff back into VM370, until by spring of 1975 it was ready to replace CSC's CP67 with the Release2-based CSC/VM distribution. Some old CSC/VM email
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

US HONE had been enhanced with single-system image, loosely-coupled operation with load-balancing and fall-over (what I believe was the largest single system in the world) ... HONE1-HONE8. Then for CSC/VM on a VM370 Release 3 base, I implement multiprocessor support ... initially for HONE so they could add a 2nd processor to each system (aggregate 16 processors). IBM 370 multiprocessor systems had the processor clock slowed down by 10%, so two processors were only 1.8 (2*.9) of a single processor. I did some highly optimized multiprocessor pathlengths and then some sleight of hand that effectively resulted in cache affinity ... which improved the cache hit rates ... resulting in HONE multiprocessor throughput running around twice a single processor (aka highly optimized multiprocessor support and improved cache hit rate offsetting the slowdown in processor clock).
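
Rough arithmetic for that claim (the cache-affinity gain figure here is purely an assumed illustration, not a measured number):

awk 'BEGIN {
  base = 2 * 0.9    # two processors, each with the clock slowed to 90%
  printf "nominal 2-way throughput: %.1f times a single processor\n", base
  gain = 1.11       # assumed ~11% from improved cache hit rates (cache affinity)
  printf "with cache-affinity gain: %.1f times a single processor\n", base * gain
}'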

IBM eventually wants to ship multiprocessor support in VM370 Release 4 ... but they were temporarily caught in an impasse. The 23Jun1969 unbundling announcement included starting to charge for (application) software (but kernel software would still be free). During the Future System period (totally different and would completely replace 370), internal politics was killing off 370 efforts (the lack of new 370 products during the period is claimed to have given the 370 clone makers their market foothold). With the demise of FS, there was a mad rush to get stuff back into the 370 product pipeline, including kicking off the quick&dirty 3033&3081 efforts in parallel.
http://www.jfsowa.com/computer/memo125.htm

Also after FS implodes, and probably because of the rise of the 370 clone makers, there was a decision to start charging for "add-on" kernel software (a planned transition by the early 80s to all kernel software being charged for) ... with the initial caveat that kernel hardware support would still be free and free kernel software couldn't have a dependency on charged-for software.

A bunch of my stuff was selected to be the "charged-for" guinea pig (and I had to spend a lot of time with lawyers and business planners on charged-for kernel software policies). While it was viewed as the VM370 Release 3 "dynamic adaptive resource manager" ... I had included a whole bunch of other stuff ... including major rewrites of code necessary for multiprocessor operation ... but not the actual multiprocessor code itself. So we come to VM370 Release 4 with multiprocessor support ... but it was dependent on a large amount of code in the Release 3 "charged-for resource manager" (and ran afoul of the criterion that free multiprocessor support couldn't have a "charged-for" prereq). The executive solution was to move 90% of the "resource manager" code into the free VM370 Release 4 base (in order for the free multiprocessor support to work) ... and the VM370 Release 4 "resource manager" (now with only about 10% of the code) would still be priced the same as the Release 3 "resource manager".

IBM eventually rolls into the 80s with the 3081 and the transition to all kernel software being charged for; VM370 is renamed VM/SP (and later VM/HPO). However, there were no longer going to be any (high-end) single processor machines. IBM's ACP/TPF (airline control program) didn't have multiprocessor support and there was concern that the whole ACP/TPF market would move to Amdahl (their latest single processor had about the same processing rate as the aggregate of the 3081's two processors, but w/o any SMP software overhead). Eventually, a partial solution was various tweaks to VM370 multiprocessor support so that any ACP/TPF running in a single processor virtual machine would have higher throughput ... but a side-effect was that nearly all other VM370 multiprocessor customers saw a 10-15% degradation. That was when I started getting calls ... to see how much I could do to compensate for what had been done specifically for the ACP/TPF market.

The initial 3081D claims were that each processor was faster than a 3033, but many benchmarks were showing about the same or slower than a 3033. They then doubled the cache size (for the "3081K"), so the aggregate of the two processors was now claimed to be about the same as Amdahl's single processor (but with the extra SMP software overhead).

A big part of the VM370 ACP/TPF change was 1) splitting off a lot of the virtual machine I/O processing (SIO/SIOF simulation and DMKCCW ccw translation) as separate units of work and 2) adding a huge number of SIGP signals, on the off-chance that CP could do the ccw translation on a 2nd, otherwise idle processor, asynchronously with ACP/TPF execution in the virtual machine (in the dedicated ACP/TPF environment, the 2nd processor is assumed otherwise idle ... somewhat analogous to a dedicated SAP processor). But just the additional SIGP handling was taking 10% for all multiprocessor customers.

... eventually (initially targeted for acp/tpf) they come out with the 3083 (a 3081 with one of the processors removed) and the 10% processor clock slowdown is removed (no longer running at 90%).

a couple old archived posts discussing the vm370 multiprocessor hack for ACP/TPF
https://www.garlic.com/~lynn/2017k.html#39 IBM etc I/O channels?
https://www.garlic.com/~lynn/2008c.html#83 CPU time differences for the same job

Cambridge Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, multiprocessor, tightly-coupled, and/or compare&swap posts
https://www.garlic.com/~lynn/subtopic.html#smp
23jun1969 unbundling announce
https://www.garlic.com/~lynn/submain.html#unbundling
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
dynamic adaptive resource management
https://www.garlic.com/~lynn/subtopic.html#fairshare
page replacement algorithms
https://www.garlic.com/~lynn/subtopic.html#wsclock

Long winded account of IBM downhill slide .... starting with Learson's failure to block the bureaucrats, careerists, and MBAs destroying the Watson legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
https://www.linkedin.com/pulse/boyd-ibm-wild-duck-discussion-lynn-wheeler/
https://www.linkedin.com/pulse/ibm-downfall-lynn-wheeler/
https://www.linkedin.com/pulse/multi-modal-optimization-old-post-from-6yrs-ago-lynn-wheeler
https://www.linkedin.com/pulse/ibm-breakup-lynn-wheeler/

IBM Downturn/Downfall posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

VM370, SMP, HONE

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: VM370, SMP, HONE
Date: 13 June, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#23 VM370, SMP, HONE

After the decision to add virtual memory to all 370s ... a ref
https://www.garlic.com/~lynn/2011d.html#73

there was a joint project between Endicott and the Cambridge Science Center to add 370 virtual memory simulation to CP/67 (running on a real 360/67). Then there were changes to CP67 to run with the 370 virtual memory architecture. A year before the first engineering 370 with virtual memory support was operational, "CP67-I" (sometimes called CP/370) was in regular operation, running in a 370 virtual machine under "CP67-H", running on a real 360/67. Then three people from San Jose came out and worked on adding 3330 & 2305 device support to "CP/370" (also referred to as "CP67-SJ") ... which was in regular use on real virtual memory 370s around the corporation (well before VM370 became production).

Some of the CSC people had transferred from the 4th flr to the 3rd flr, taking over the IBM Boston Programming Center for the VM/370 development group. In the morph of CP/67->VM/370 lots of code was greatly simplified or dropped (including multiprocessor, tightly-coupled, support).

There was lots of performance work going on at CSC and I created the "autolog" command for CP67 automated benchmarking. There was an enormous amount of workload/activity monitoring data from extended periods for systems all around the corporation. This was used to create a synthetic workload application. It was possible to specify various levels of I/O intensive, compute intensive, memory intensive, interactive ... to simulate a variety of different workload characteristics. Then scripts could be created that used "autolog" to activate simulated users running various kinds of workloads. Trivia: autolog was also almost immediately adopted for automatically bringing up service virtual machines in production environments (now frequently referred to as virtual appliances by present day virtual machine systems).

So after "autolog" and automated benchmarks were working for VM/370 ... 1st thing was run some moderate stressful benchmarks ... and VM/370 was crashing nearly every time ... so the next step was redo port the CP67 kerrnel serialization mechanism (eliminated the constant crashes as well as "zombie" tasks/users). Now could start comparing "vanilla" VM370 with enhanced "VM370" with increasing amount of features ported from CP67 to VM370 (including dynamic adaptive resource management, lots of kernel restructuring facilitating multiprocessor implementation, page replacement algorithms, optimized pathlengths, etc).

An APL-based analytical system model was done at the science center ... and I would feed in configurations & workloads and compare the system model analysis with the real benchmark results (helping validate both the system model and dynamic adaptive resource management in real benchmarks). Versions of the analytical system model were used by HONE for the "single system image" cluster load balancing and for the Performance Predictor (branch people could enter customer configuration and workload and ask "what-if" questions about configuration and/or workload changes).

Leading up to releasing the guinea pig for the charged-for kernel add-on ... I did 2000 automated benchmarks that took three months elapsed time to run. The first thousand benchmarks included several hundred that varied workload and configuration uniformly across the domain of all live monitored operations ... and then there were a couple hundred "stress" benchmarks (outside of normal monitored operations). For the 2nd thousand automated benchmarks, the analytical system model was modified to define configuration/workload benchmarks searching for anomalous combinations (that the dynamic adaptive resource management might not correctly handle).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
automated benchmarking posts
https://www.garlic.com/~lynn/submain.html#benchmark
hone posts
https://www.garlic.com/~lynn/subtopic.html#hone
dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
paging algorithm posts
https://www.garlic.com/~lynn/subtopic.html#wsclock

some posts/refs about CSC performance work
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022f.html#53 z/VM 50th - part 4
https://www.garlic.com/~lynn/2022.html#46 Automated Benchmarking
https://www.garlic.com/~lynn/2021j.html#25 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2016b.html#54 CMS\APL

other posts mentioning Performance Predictor
https://www.garlic.com/~lynn/2023b.html#87 IRS and legacy COBOL
https://www.garlic.com/~lynn/2023b.html#32 Bimodal Distribution
https://www.garlic.com/~lynn/2023.html#90 Performance Predictor, IBM downfall, and new CEO
https://www.garlic.com/~lynn/2022h.html#7 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#88 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022f.html#3 COBOL and tricks
https://www.garlic.com/~lynn/2022e.html#96 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#79 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#58 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#51 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022.html#104 Mainframe Performance
https://www.garlic.com/~lynn/2022.html#46 Automated Benchmarking
https://www.garlic.com/~lynn/2021k.html#121 Computer Performance
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021j.html#30 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2021d.html#43 IBM Powerpoint sales presentations
https://www.garlic.com/~lynn/2021b.html#32 HONE story/history

--
virtualization experience starting Jan1968, online at home since Mar1970

VM370, SMP, HONE

From: Lynn Wheeler <lynn@garlic.com>
Subject: VM370, SMP, HONE
Date: 14 June, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#23 VM370, SMP, HONE
https://www.garlic.com/~lynn/2023d.html#24 VM370, SMP, HONE
and
https://www.garlic.com/~lynn/2023d.html#17 MVS RAS

There was some resource management expert from corporate who said that he wouldn't sign off on shipping my "dynamic adaptive resource manager" without manual tuning parameters; the current state of the art (i.e. MVS) had huge boatloads of manual tuning parameters. I tried to explain to him what "dynamic adaptive" meant, but it fell on deaf ears. So I created my "SRM Tuning Parameters" (as an MVS spoof), providing all the documentation and formulas on how the manual tuning parameters worked. Some years later, I would ask customers if they got the joke ... aka from "Operations Research" and "degrees of freedom" ... i.e. the dynamic adaptive code had greater "degrees of freedom" and could override any manual tuning parameter setting (aka all the information was in the manuals ... but very few realized/recognized that the formulas weren't "static" ... but were constantly being dynamically updated). some related post
https://www.linkedin.com/pulse/zvm-50th-part-4-lynn-wheeler

dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare

specific past archive posts mentioning dynamic adaptive, SRM, and degrees of freedom
https://www.garlic.com/~lynn/2023b.html#23 IBM VM370 "Resource Manager"
https://www.garlic.com/~lynn/2022f.html#53 z/VM 50th - part 4
https://www.garlic.com/~lynn/2022e.html#90 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#66 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2010m.html#81 Nostalgia
https://www.garlic.com/~lynn/2008p.html#4 Strings story
https://www.garlic.com/~lynn/2008p.html#1 My Funniest or Most Memorable Moment at IBM
https://www.garlic.com/~lynn/2002k.html#66 OT (sort-of) - Does it take math skills to do data processing ?
https://www.garlic.com/~lynn/2001e.html#51 OT: Ever hear of RFC 1149? A geek silliness taken wing

--
virtualization experience starting Jan1968, online at home since Mar1970

Is America Already in a Civil War?

From: Lynn Wheeler <lynn@garlic.com>
Subject: Is America Already in a Civil War?
Date: 15 June, 2023
Blog: Facebook
Is America Already in a Civil War? An expert in militant Christian nationalism weighs in: "There are little fires all around us"
https://www.rollingstone.com/politics/politics-features/donald-trump-far-right-brad-onishi-america-civil-war-interview-1234772017/

I'm struck that you're describing overlaps between extreme statements by far-right politicians and far-right religious figures, without a clear division. It's another instance of how Christian nationalism is pervasive in right-wing American politics at the moment. Looking at these quotes alone, it's hard to tell who's speaking. Is it a Republican official? Is it a Fox News host? Is it a pastor? They sound eerily similar -- even if some of the buzzwords are different.

... snip ...

End of the American dream? The dark history of 'America first'
https://www.theguardian.com/books/2018/apr/21/end-of-the-american-dream-the-dark-history-of-america-first

In fact, "America first" has a much longer and darker history than that, one deeply entangled with the country's brutal legacy of slavery and white nationalism, its conflicted relationship to immigration, nativism and xenophobia. Gradually, the complex and often terrible tale this slogan represents was lost to mainstream history - but kept alive by underground fascist movements. "America first" is, to put it plainly, a dog whistle.

... snip ...

inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
racism posts
https://www.garlic.com/~lynn/submisc.html#racism

posts mentioning militant white/Christian nationalism
https://www.garlic.com/~lynn/2023c.html#52 The many ethics scandals of Clarence and Ginni Thomas, briefly explained
https://www.garlic.com/~lynn/2022g.html#35 Lee Considered: General Robert E. Lee and Civil War History
https://www.garlic.com/~lynn/2022g.html#14 It Didn't Start with Trump: The Decades-Long Saga of How the GOP Went Crazy
https://www.garlic.com/~lynn/2022f.html#78 Wealthy Donors Bankroll Christian Nationalists to Sustain Unregulated Capitalism
https://www.garlic.com/~lynn/2022f.html#31 The Rachel Maddow Show 7/25/22
https://www.garlic.com/~lynn/2022d.html#4 Alito's Plan to Repeal Roe--and Other 20th Century Civil Rights
https://www.garlic.com/~lynn/2022c.html#113 The New New Right Was Forged in Greed and White Backlash
https://www.garlic.com/~lynn/2022.html#107 The Cult of Trump is actually comprised of MANY other Christian cults
https://www.garlic.com/~lynn/2021j.html#104 Who Knew ?
https://www.garlic.com/~lynn/2021i.html#59 The Uproar Ovear the "Ultimate American Bible"
https://www.garlic.com/~lynn/2021g.html#67 Does America Like Losing Wars?
https://www.garlic.com/~lynn/2021f.html#80 After WW2, US Antifa come home
https://www.garlic.com/~lynn/2021d.html#65 Apple, Amazon and Google slam 'discriminatory' voting restriction laws
https://www.garlic.com/~lynn/2021c.html#96 How Ike Led
https://www.garlic.com/~lynn/2021c.html#94 How Ike Led
https://www.garlic.com/~lynn/2021c.html#93 How 'Owning the Libs' Became the GOP's Core Belief
https://www.garlic.com/~lynn/2021c.html#79 Racism's Loud Echoes in America
https://www.garlic.com/~lynn/2021c.html#23 When Nazis Took Manhattan
https://www.garlic.com/~lynn/2021.html#66 Democracy is a threat to white supremacy--and that is the cause of America's crisis
https://www.garlic.com/~lynn/2021.html#30 Trump and Republican Party Racism
https://www.garlic.com/~lynn/2020.html#6 Onward, Christian fascists
https://www.garlic.com/~lynn/2019e.html#161 Fascists
https://www.garlic.com/~lynn/2019e.html#62 Profit propaganda ads witch-hunt era
https://www.garlic.com/~lynn/2019e.html#57 Homeland Security Dept. Affirms Threat of White Supremacy After Years of Prodding
https://www.garlic.com/~lynn/2019d.html#84 Steve King Devised an Insane Formula to Claim Undocumented Immigrants Are Taking Over America
https://www.garlic.com/~lynn/2019d.html#5 Don't Blame Capitalism
https://www.garlic.com/~lynn/2019c.html#65 The Forever War Is So Normalized That Opposing It Is "Isolationism"
https://www.garlic.com/~lynn/2019.html#81 LUsers
https://www.garlic.com/~lynn/2018f.html#19 A Tea Party Movement to Overhaul the Constitution Is Quietly Gaining
https://www.garlic.com/~lynn/2018d.html#102 The Persistent Myth of U.S. Precision Bombing
https://www.garlic.com/~lynn/2018c.html#107 Post WW2 red hunt
https://www.garlic.com/~lynn/2018b.html#60 Revealed - the capitalist network that runs the world
https://www.garlic.com/~lynn/2017j.html#24 What if the Kuomintang Had Won the Chinese Civil War?
https://www.garlic.com/~lynn/2017d.html#55 Should America Have Entered World War I?
https://www.garlic.com/~lynn/2017.html#15 Separation church and state
https://www.garlic.com/~lynn/2015h.html#33 The wars in Vietnam, Iraq, and Afghanistan were lost before they began, not on the battlefields
https://www.garlic.com/~lynn/2015g.html#33 1973--TI 8 digit electric calculator--$99.95

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3278

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3278
Date: 17 June, 2023
Blog: Facebook
For the 3278, they moved a lot of the electronics back into the control unit (to reduce manufacturing costs), but the change drastically drove up coax protocol chatter and latency (compared to the 3277, where we could also do all sorts of hardware tweaks to improve human factors). Typical 3278 coax protocol latency was .3-.5secs, compared to .086 for the 3277. This was during the period when lots of studies showed quarter second response improved productivity (which the 3278 was never able to meet). I also had some number of my enhanced production operating systems with .11sec interactive system response; with the 3277's .086 that gave .199sec response seen by the human at the terminal (and of course it was a really rare MVS that even had 1sec system response ... so many would never have noticed the difference between 3277 & 3278). A letter was written to the 3278 Product Administrator about the 3278 being worse than the 3277 for interactive computing ... the eventual response was that the 3278 was designed for data entry (aka electronic keypunch), not interactive computing.
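
The quarter-second arithmetic, using the round figures above:

awk 'BEGIN {
  sys = 0.11   # enhanced production system interactive response, secs
  printf "3277: %.2f + %.3f = ~%.1fsec at the terminal, under a quarter second\n", sys, 0.086, sys + 0.086
  printf "3278: %.2f + %.1f-%.1f = %.2f-%.2fsec, never under a quarter second\n", sys, 0.3, 0.5, sys + 0.3, sys + 0.5
}'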

Also with IBM/PC terminal emulation cards, 3277 cards would have 3-4 times the upload/download throughput of 3278 cards (because of the difference in coax protocol overhead and latency). My 3277s were replaced when I could get IBM PC terminal cards that even beat 3277 ... then workstations with fast ethernet cards (and routers with high-speed mainframe interfaces).

Trivia1: AWD had done their own 4mbit token-ring card for the PC/RT (which had a PC/AT bus). Then with the microchannel RS/6000, AWD was restricted to the PS2 16mbit token-ring cards ... which had been severely performance-kneecapped by the communication group (in their fierce battle with client/server and distributed computing, attempting to preserve their dumb terminal paradigm and install base). Turned out that the PS2 microchannel 16mbit token-ring cards even had lower throughput than the PC/RT 4mbit token-ring cards. The PC/RT 4mbit token-ring cards left the $800 16mbit token-ring cards in the dust ... and $69 10mbit Ethernet cards @8.5mbits easily beat both.

Trivia2: The communication group was fighting hard to block mainframe TCP/IP from being released ... but that was apparently reversed by some influential customers. The communication group then changed their strategy, claiming that since they had corporate responsibility for everything that crossed datacenter walls, it had to be released through them. What shipped got 44kbytes/sec aggregate throughput using nearly a whole 3090 processor (it was also ported to MVS by simulating some VM370 interfaces, further driving up overhead). I then did the changes for RFC1044 and in some tuning tests at Cray Research between an IBM 4341 and a Cray, got sustained 4341 channel throughput, using only a modest amount of the 4341 processor (something like a 500 times improvement in bytes moved per instruction executed).

There was enough electronics in the 3277 that they were able to hack a large Tektronix graphics screen into the side of a 3277 for the "3277ga" (aka graphics adapter) ... sort of an inexpensive 2250/3250.

Mainframe & RFC1044 TCP/IP support posts
https://www.garlic.com/~lynn/subnetwork.html#1044
3tier, ethernet, high-speed routers posts
https://www.garlic.com/~lynn/subnetwork.html#3tier

past posts referencing difference 3277 & 3278 hardware response (.086 versus .3-.5)
https://www.garlic.com/~lynn/2023c.html#42 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023b.html#4 IBM 370
https://www.garlic.com/~lynn/2023.html#93 IBM 4341
https://www.garlic.com/~lynn/2023.html#2 big and little, Can BCD and binary multipliers share circuitry?
https://www.garlic.com/~lynn/2022h.html#96 IBM 3270
https://www.garlic.com/~lynn/2022e.html#18 3270 Trivia
https://www.garlic.com/~lynn/2022c.html#68 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022b.html#123 System Response
https://www.garlic.com/~lynn/2022b.html#110 IBM 4341 & 3270
https://www.garlic.com/~lynn/2022b.html#33 IBM 3270 Terminals
https://www.garlic.com/~lynn/2022.html#94 VM/370 Interactive Response
https://www.garlic.com/~lynn/2021j.html#74 IBM 3278
https://www.garlic.com/~lynn/2021i.html#69 IBM MYTE
https://www.garlic.com/~lynn/2021c.html#0 Colours on screen (mainframe history question) [EXTERNAL]
https://www.garlic.com/~lynn/2021.html#84 3272/3277 interactive computing
https://www.garlic.com/~lynn/2019e.html#28 XT/370
https://www.garlic.com/~lynn/2019c.html#4 3270 48th Birthday
https://www.garlic.com/~lynn/2018f.html#54 PROFS, email, 3270
https://www.garlic.com/~lynn/2018d.html#32 Walt Doherty - RIP
https://www.garlic.com/~lynn/2017e.html#26 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017d.html#25 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2016f.html#1 Frieden calculator
https://www.garlic.com/~lynn/2016e.html#51 How the internet was invented
https://www.garlic.com/~lynn/2016d.html#104 Is it a lost cause?
https://www.garlic.com/~lynn/2016d.html#42 Old Computing
https://www.garlic.com/~lynn/2016c.html#8 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2016.html#15 Dilbert ... oh, you must work for IBM
https://www.garlic.com/~lynn/2015g.html#58 [Poll] Computing favorities
https://www.garlic.com/~lynn/2015d.html#33 Remember 3277?
https://www.garlic.com/~lynn/2015.html#38 [CM] IBM releases Z13 Mainframe - looks like Batman
https://www.garlic.com/~lynn/2014m.html#127 How Much Bandwidth do we have?
https://www.garlic.com/~lynn/2014h.html#106 TSO Test does not support 65-bit debugging?
https://www.garlic.com/~lynn/2014g.html#26 Fifty Years of BASIC, the Programming Language That Made Computers Personal
https://www.garlic.com/~lynn/2014g.html#23 Three Reasons the Mainframe is in Trouble
https://www.garlic.com/~lynn/2014f.html#41 System Response
https://www.garlic.com/~lynn/2013g.html#14 Tech Time Warp of the Week: The 50-Pound Portable PC, 1977
https://www.garlic.com/~lynn/2012p.html#1 3270 response & channel throughput
https://www.garlic.com/~lynn/2012n.html#37 PDP-10 and Vax, was System/360--50 years--the future?
https://www.garlic.com/~lynn/2012m.html#37 Why File transfer through TSO IND$FILE is slower than TCP/IP FTP ?
https://www.garlic.com/~lynn/2012m.html#15 cp67, vm370, etc
https://www.garlic.com/~lynn/2012d.html#19 Writing article on telework/telecommuting
https://www.garlic.com/~lynn/2012.html#13 From Who originated the phrase "user-friendly"?
https://www.garlic.com/~lynn/2011p.html#61 Migration off mainframe
https://www.garlic.com/~lynn/2011g.html#43 My first mainframe experience
https://www.garlic.com/~lynn/2010b.html#31 Happy DEC-10 Day
https://www.garlic.com/~lynn/2009q.html#72 Now is time for banks to replace core system according to Accenture
https://www.garlic.com/~lynn/2009q.html#53 The 50th Anniversary of the Legendary IBM 1401
https://www.garlic.com/~lynn/2009e.html#19 Architectural Diversity
https://www.garlic.com/~lynn/2006s.html#42 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2005r.html#15 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005r.html#12 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2001m.html#19 3270 protocol

--
virtualization experience starting Jan1968, online at home since Mar1970

Ingenious librarians

From: Lynn Wheeler <lynn@garlic.com>
Subject: Ingenious librarians
Date: 18 June, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#7 Ingenious librarians
https://www.garlic.com/~lynn/2023d.html#11 Ingenious librarians
https://www.garlic.com/~lynn/2023d.html#12 Ingenious librarians

The NLM catalog had so many entries that searches frequently required 5-10 terms, with "and", "or", and "not" logic ... like "author1 and author2 but not author3". They had something like 80 different categories of information that they indexed on ... like "author", "keywords", etc. The search database (BDAM files) had a prebuilt list of all articles that matched each item. So an "author1 or author2" search required retrieving the list records for "author1" and "author2" and combining the lists (join) ... an "author1 and author2" search required retrieving the record lists for "author1" and "author2" and finding the items in both lists (intersection). One of the things they discovered was that with more than 3-4 and/or/not operations in a single search, people frequently inverted the meanings in their mind and specified the inverse of what they meant. There were something like 30k+ skilled search medical librarians using NLM (on behalf of their clients, physicians, medical professionals, etc).

I recently tried to do something analogous (using simple command line tools). On facebook it is possible to request all of your (group) postings, comments, and comment-replies in date order. If you repeatedly go to the end of the web page ... it will keep adding (older) entries. At some point it gets tedious ... but I recently stopped at all entries back to 1jan2021. I then saved the webpage ... 23mbytes ... filled with an enormous amount of HTML gorp. I then wrote a page of code that eliminated the html gorp, reducing it to 4mbytes. This is a flat file of the (2549) URLs for the posts/comments/comment-replies (back to 1jan2021) followed by the text.

A keyword search ("grep") has the identifier of the facebook URL plus the keyword(s) piped to awk that only saves the URLs for those search terms. While "grep" can do a single search for multiple terms, it won't result in knowing if every item had all multiple terms, just at least one of terms. To have every item that had all terms ("and"), need separate search for each term, generate a separate file of URLs for each term and then find the URLs that appeared in every file.

For instance, individual searches for "author1" and "author2" generate files "author1" and "author2". URLs that have both "author1" and "author2":

grep -F -f author1 author2 >both


will result in file of URLs with both "author1" and "author2"

cat author1 author2 | sort -u >either


will result in file of URLs with either "author1" and/or "author2"

grep -v -F -f author1 author2 >onlyauthor2


will result in file of URLs with author2 but not author1

so write a script that emulates the NLM search.

findurl author1 and (author2 or author3) not author4:

cat author2 author3 | grep -F -f author1 | grep -v -F -f author4 | sort -u


findurl (author1 and author2) or author3 not author4:

( grep -F -f author1 author2 ; cat author3 ; ) | grep -v -F -f author4 | sort -u
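
The same thing can be packaged as a few (bash) helper functions operating on files of URLs, one per line (function and temp file names here are mine, just to show the composition):

url_or()  { sort -u "$@" ; }                               # "or"  = union
url_and() { comm -12 <(sort -u "$1") <(sort -u "$2") ; }   # "and" = intersection
url_not() { comm -23 <(sort -u "$1") <(sort -u "$2") ; }   # "not" = in $1 but not in $2

# author1 and (author2 or author3) not author4
url_or author2 author3 > tmp23
url_and author1 tmp23 > tmp123
url_not tmp123 author4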


example of my posts/comments/replies (since 1jan2021) to this group that mention "future system", 3033, and (MVS) CSA:
https://www.facebook.com/groups/ProfessionalMainframers/permalink/1583169168704783/
https://www.facebook.com/groups/ProfessionalMainframers/permalink/1939471236407906/
https://www.facebook.com/groups/ProfessionalMainframers/posts/1553682758320091/?comment_id=1553946134960420&reply_comment_id=1557359387952428
https://www.facebook.com/groups/ProfessionalMainframers/posts/1594183834269983/?comment_id=1594405594247807&reply_comment_id=1594528840902149
https://www.facebook.com/groups/ProfessionalMainframers/posts/1665315483823484/?comment_id=1665353617153004

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

posts mentioning Future System, 3033, and (MVS) CSA
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
https://www.garlic.com/~lynn/2022d.html#55 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021k.html#113 IBM Future System
https://www.garlic.com/~lynn/2021h.html#70 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2019d.html#115 Assembler :- PC Instruction
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2018c.html#23 VS History
https://www.garlic.com/~lynn/2015b.html#60 ou sont les VAXen d'antan, was Variable-Length Instructions that aren't
https://www.garlic.com/~lynn/2013m.html#71 'Free Unix!': The world-changing proclamation made 30 years agotoday
https://www.garlic.com/~lynn/2011f.html#39 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3278

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3278
Date: 18 June, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#27 IBM 3278

Some of the MIT CTSS/7094 people went to the IBM science center on the 4th flr and did virtual machines, the networking used for the majority of the corporate network (and also the corporate sponsored univ. BITNET network), lots of online apps and performance technology. Virtual machines started with CP40/CMS using a 360/40 with virtual memory hardware added, it morphs into CP67/CMS when 360/67 standard with virtual memory became available. CTSS RUNOFF (document formatting) was ported to CMS as "SCRIPT". GML was then invented at the science center in 1969 and GML tag processing added to SCRIPT. After a decade GML morphs into ISO SGML standard, and after another decade it morphs into HTML at CERN.

One of the first mainstream IBM documents done in CMS SCRIPT was the 370 (architecture) "RED BOOK". A CMS SCRIPT command line option would format either the full "RED BOOK" or the "Principles of Operation" subset (minus lots of engineering notes, alternatives considered, implementation details).

In the early 80s, I wanted to demonstrate that REX (before it was renamed REXX and released to customers) wasn't just another pretty scripting language, and chose to redo the product's large-assembler dump/problem-analysis app in REX with ten times the function and ten times the performance (some sleight of hand there). I also collected soft copies of as many system documents as I could and packaged them so that, as part of analysis, a person could access related system information. For whatever reason it wasn't released to customers ... even tho it was in use by most PSRs and most internal datacenters.

Old email from the 3092 group (the 3090 service processor, a pair of 4361s) that wanted to ship it as part of the 3092
https://www.garlic.com/~lynn/2010e.html#email861031
https://www.garlic.com/~lynn/2010e.html#email861223

DUMPRX posts
https://www.garlic.com/~lynn/submain.html#dumprx
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
gml/sgml posts
https://www.garlic.com/~lynn/submain.html#sgml

posts mentioning 370 Architecture Red Book and Principles of Operation
https://www.garlic.com/~lynn/2023.html#24 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#5 1403 printer
https://www.garlic.com/~lynn/2022b.html#92 Computer BUNCH
https://www.garlic.com/~lynn/2019e.html#2 To Anne & Lynn Wheeler, if still observing
https://www.garlic.com/~lynn/2018c.html#15 Old word processors
https://www.garlic.com/~lynn/2017k.html#19 little old mainframes, Re: Was it ever worth it?
https://www.garlic.com/~lynn/2016b.html#78 Microcode
https://www.garlic.com/~lynn/2013o.html#84 The Mother of All Demos: The 1968 presentation that sparked atech revolutio
https://www.garlic.com/~lynn/2013c.html#21 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012l.html#73 PDP-10 system calls, was 1132 printer history
https://www.garlic.com/~lynn/2010p.html#60 Daisywheel Question: 192-character Printwheel Types
https://www.garlic.com/~lynn/2009s.html#1 PDP-10s and Unix
https://www.garlic.com/~lynn/2009j.html#67 DCSS
https://www.garlic.com/~lynn/2008l.html#47 Intel: an expensive many-core future is ahead of us
https://www.garlic.com/~lynn/2007r.html#23 Abend S0C0
https://www.garlic.com/~lynn/2007d.html#32 Running OS/390 on z9 BC
https://www.garlic.com/~lynn/2006o.html#59 Why no double wide compare and swap on Sparc?
https://www.garlic.com/~lynn/2006h.html#55 History of first use of all-computerized typesetting?
https://www.garlic.com/~lynn/2005f.html#45 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005e.html#53 System/360; Hardwired vs. Microcoded
https://www.garlic.com/~lynn/2005.html#5 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004p.html#50 IBM 3614 and 3624 ATM's
https://www.garlic.com/~lynn/2004c.html#1 Oldest running code
https://www.garlic.com/~lynn/2003k.html#45 text character based diagrams in technical documentation
https://www.garlic.com/~lynn/2003f.html#52 ECPS:VM DISPx instructions
https://www.garlic.com/~lynn/2003d.html#76 reviving Multics
https://www.garlic.com/~lynn/2002h.html#69 history of CMS
https://www.garlic.com/~lynn/2002g.html#52 Spotting BAH Claims to Fame
https://www.garlic.com/~lynn/2002b.html#48 ... the need for a Museum of Computer Software

--
virtualization experience starting Jan1968, online at home since Mar1970

Profit Inflation Is Real

From: Lynn Wheeler <lynn@garlic.com>
Subject: Profit Inflation Is Real
Date: 19 June, 2023
Blog: Facebook
Profit Inflation Is Real
https://www.nakedcapitalism.com/2023/06/profit-inflation-is-real.html

Having lived through the 1970s inflation, it's important to add: companies then suffered a profit squeeze. Labor had bargaining power (and in many cases, cost of living provisions in their labor contracts). Stagflationary conditions meant some buyers would reduce purchases at full cost pass throughs, so they opted to preserve market share at the expense of margins. So it is important to understand that companies being able to preserve their margins during inflation is a historical anomaly, even before getting to the question of whether they are increasing them.

... snip ...

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3278

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3278
Date: 19 June, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#27 IBM 3278
https://www.garlic.com/~lynn/2023d.html#29 IBM 3278

About the same time in the late 80s, a study (the same one or a different one) had a comparison of VTAM LU6.2 with (workstation) NFS ... with VTAM having a 160k instruction pathlength compared to 5k for workstation NFS (and the large number of VTAM LU6.2 buffer copies had a worse effect on processor throughput than the 160k instruction pathlength). In the 90s, the communication group hired a silicon valley contractor to implement TCP/IP directly in VTAM. What he initially demo'ed ran much faster than LU6.2. He was then told that everybody "knows" that a "proper" TCP/IP implementation is much slower than LU6.2 ... and they would only be paying for a "proper" implementation.

Early 80s, I had the HSDT project for T1 and faster computer links. Besides working on TCP/IP ... I also had (Ed Hendricks') VNET/RSCS (used for the internal corporate network and the corporate sponsored univ BITNET) that relied on the (VM370) synchronous "diagnose" instruction to read/write 4k disk blocks ... which serialized/blocked ... limiting throughput to around 6-8 4k blocks/sec (24k-32k bytes/sec). I did a spool file rewrite in VS/Pascal running in a virtual address space (borrowing an "upcall" pascal implementation from Palo Alto). I needed 300kbytes/sec (70-80 4kbyte blocks/sec) of disk throughput for each T1 link (a tenfold increase) ... so the rewrite implemented an asynchronous interface, multi-record transfer, contiguous allocation, read-ahead, write-behind, etc. Part of the issue was that standard IBM controllers only supported links up to 56kbit/sec. Mid-80s, the communication group even did executive committee presentations that customers wouldn't want T1 links until sometime in the 90s; they did a survey of customer "fat-pipes" (multiple parallel 56kbit/sec links treated as a single logical link), finding they dropped to zero around 6 or 7 links. What they didn't know (or didn't want to tell) was that the typical telco tariff for a T1 was about the same as 6 or 7 56kbit links (around 300kbits-350kbits); customers just switched to full T1 with non-IBM controllers. We did a trivial survey finding 200 customers with full T1 and non-IBM controllers.
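
The tenfold spool requirement, spelled out with the numbers above:

awk 'BEGIN {
  printf "synchronous diagnose ceiling: 6-8 blocks/sec x 4kbytes = %d-%dkbytes/sec\n", 6*4, 8*4
  printf "per-T1 target: 70-80 blocks/sec x 4kbytes = %d-%dkbytes/sec (roughly tenfold)\n", 70*4, 80*4
}'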

We were also working with the NSF director and were supposed to get $20M to interconnect the NSF supercomputer centers. Then congress cuts the budget, some other things happen and finally an RFP was released (calling for T1, in part based on what we already had running) ... Preliminary Announcement (28Mar1986)
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

IBM internal politics did not allow us to bid. Being blamed for online computer conferencing (precursor to social media) inside IBM likely contributed; folklore is that 5of6 members of the corporate executive committee wanted to fire me. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid; the RFP was awarded 24Nov87). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet
https://www.technologyreview.com/s/401444/grid-computing/
Note: the winning bid put in 440kbit/sec links and then, possibly to be able to claim a T1 network, put in T1 trunks with telco multiplexors running multiple 440kbit links per trunk

Various executives were distributing all sorts of misinformation email about how SNA could be used for NSFNET. Somebody collected it and forwarded it to us ... previously posted to the net (heavily snipped and redacted to protect the guilty)
https://www.garlic.com/~lynn/2006w.html#email870109

The communication group did eventually come out with the 3737 ... a whole boatload of memory and M68Ks with a mini-VTAM that spoofed the host VTAM (claiming to be a local CTCA & immediately ACKing transmission so the host VTAM would keep transmitting) ... however even on relatively short-haul full-duplex T1s (1.5mbit concurrent in each direction) it was limited to about 2mbit/sec (degrading as latency & distance increased; also the 3737 only simulated SNA VTAM<->VTAM, so nothing else could use it). Some old archived 3737 email
https://www.garlic.com/~lynn/2011g.html#email880130
https://www.garlic.com/~lynn/2011g.html#email880606
https://www.garlic.com/~lynn/2018f.html#email880715
https://www.garlic.com/~lynn/2011g.html#email881005

Wiki page for Ed (he passed aug2020)
https://en.wikipedia.org/wiki/Edson_Hendricks
SJMN news article gone 404, but lives on at wayback machine
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
some more Ed from his web pages (at wayback machine)
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm
long-winded (linkedin) post
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

some posts mentioning redoing the VM370 spool file system in vs/pascal
https://www.garlic.com/~lynn/2017e.html#24 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2010k.html#26 Was VM ever used as an exokernel?
https://www.garlic.com/~lynn/2009h.html#63 Operating Systems for Virtual Machines
https://www.garlic.com/~lynn/2008g.html#22 Was CMS multi-tasking?
https://www.garlic.com/~lynn/2007c.html#21 How many 36-bit Unix ports in the old days?
https://www.garlic.com/~lynn/2005s.html#28 MVCIN instruction
https://www.garlic.com/~lynn/2003k.html#63 SPXTAPE status from REXX
https://www.garlic.com/~lynn/2003k.html#26 Microkernels are not "all or nothing". Re: Multics Concepts For
https://www.garlic.com/~lynn/2003b.html#33 dasd full cylinder transfer (long post warning)

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 370/195

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 370/195
Date: 19 June, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#20 IBM 360/195
https://www.garlic.com/~lynn/2023d.html#22 IBM 360/195

IBM 370/195 discussion (in another facebook group)

Some of the MIT CTSS/7094 people went to the IBM science center on the 4th flr and did virtual machines, the networking used for the majority of the corporate network (and also the corporate sponsored univ. BITNET network), lots of online apps and performance technology. Virtual machines started with CP40/CMS using a 360/40 with virtual memory hardware added, it morphs into CP67/CMS when 360/67 standard with virtual memory became available.

Shortly after joining IBM, the 195 group wants me to help them with multithreading. The 370/195 had a 64-stage pipeline, but many codes (with conditional branches) would drain the pipeline, cutting throughput in half. They wanted to simulate a (multithreaded) two-processor design (two threads, potentially each running at 1/2 speed) ... somewhat leveraging red/blue from ACS/360 ... mentioned in this account of the end of ACS/360 (supposedly killed because some were worried it would advance the state-of-the-art too fast and IBM would lose control of the market; it also mentions ACS/360 features that show up more than 20yrs later with ES/9000)
https://people.cs.clemson.edu/~mark/acs_end.html

The 195 group told me that the major change from the 360/195 to the 370/195 was adding instruction retry for various kinds of soft hardware errors.

A decade+ ago, I was asked if I could track down the decision to add virtual memory to all 370s (found a former employee who had been staff to the executive) ... aka MVT storage management was so bad that regions had to be specified four times larger than used. As a result a typical 1mbyte 370/165 could only run four regions concurrently ... insufficient to keep the processor busy and justified. Adding virtual memory could increase the number of concurrently running regions by a factor of four with little or no paging. Archived post with pieces of the email exchange
https://www.garlic.com/~lynn/2011d.html#73

It was deemed too difficult to add virtual memory to 370/195 and further 195 work was dropped.

165 trivia: The 165 engineers then start complaining that if they have to implement the full 370 virtual memory architecture, the virtual memory announcement would have to slip six months. So the extra features were dropped, and other models that had already implemented the full architecture (and any software already written to use the dropped features) had to pull back to the 165 subset.

internet trivia: early 80s, I had the HSDT project, T1 and faster computer links ... and we were working with the NSF director; we were supposed to get $20M to interconnect the NSF supercomputer centers. Then congress cuts the budget, some other things happen, and finally an RFP was released (calling for T1, in part based on what we already had running) ... Preliminary Announcement (28Mar1986)
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

IBM internal politics did not allow us to bid (being blamed for online computer conferencing, precursor to social media, inside IBM likely contributed; folklore is that 5of6 members of the corporate executive committee wanted to fire me). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid, RFP awarded 24Nov87). As regional networks connected in, it became the NSFNET backbone, precursor to the modern internet
https://www.technologyreview.com/s/401444/grid-computing/

Note: the winning bid put in 440kbit/sec links and then, possibly to claim a T1 network, put in T1 trunks with telco multiplexors running multiple 440kbit links per trunk

360 was originally supposed to be ASCII ... but the new unit record gear was late, so they had to use tab BCD hardware ... so it shipped as EBCDIC, assuming it would change when the new hardware was ready. Account gone 404, but lives on at the wayback machine
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM
other details
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/ASCII.HTM
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/FATHEROF.HTM
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/HISTORY.HTM

At univ, I took a two credit hr intro to fortran/computers. At the end of the semester, I was hired to re-implement 1401 MPIO on a 360/30. The univ had a 709/1401 (709 tape->tape with the 1401 used as unit record front-end). The univ was sold a 360/67 to replace the 709/1401 for tss/360 ... pending arrival of the 360/67, the 1401 was temporarily replaced with a 360/30. The 360/30 had 1401 microcode simulation ... which could run 1401 MPIO directly. I guess they hired me as part of gaining 360 experience. The univ shutdown the datacenter on weekends. I was given a bunch of hardware and software manuals and turned loose in the datacenter on weekends (although 48hrs w/o sleep made monday classes a little hard). I got to design and implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc. ... and within a few weeks I had a 2000-card assembler program. Then within a year of taking the intro class (after the 360/67 arrived), I was hired fulltime, responsible for os/360 (tss/360 never came to production fruition, so it ran as a 360/65 w/os360).

When I started with responsibility for os/360, student fortran jobs took over a minute (compared to the 709, where they took less than a second). I install HASP, cutting the time in half. I then redo STAGE2 SYSGEN to carefully place datasets and PDS members to optimize arm seek and multi-track search ... cutting the time another 2/3rds to 12.9secs (student fortran never got better than the 709 until I install Univ. of Waterloo WATFOR).

Three people came out from the science center in early 1968 to install cp67 at the univ. (3rd installation after CSC itself and MIT Lincoln Labs) ... the univ 360/67 was at least half dedicated to running admin cobol (a lot ported from 709 cobol), so I primarily played with cp67 during my dedicated weekend time (in addition to doing lots of work on os360). Part of a fall68 SHARE (IBM mainframe user group)
https://www.share.org/
presentation on both os360 (at the time MFT14) and CP67
https://www.garlic.com/~lynn/94.html#18

I moved to MVT with the next release, "15/16" (R15 was slipping and IBM combined the two releases). For MVT18, I also modified HASP and implemented 2741 and TTY/ASCII terminal support with an editor (implementing the CMS editor syntax) for a home-grown CRJE. The univ. had got a 2250M1 (360 channel controller interface) with the 360/67, but it wasn't used for much. MIT Lincoln Labs had done a 2250 fortran library for CP67/CMS and I modified the CMS editor to use the 2250 library ... getting an early fullscreen editor. Lincoln Labs had also worked on an ARPANET interface for CP67.
https://www.rfc-editor.org/rfc/rfc306.txt

Before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit to better monetize the investment). I thought the Renton datacenter was possibly the largest in the world, with a couple hundred million in IBM gear (and there was a disaster plan to duplicate it up at the new 747 plant in Everett ... where if Mt. Rainier heats up, the resulting mud slide takes out the Renton datacenter). Lots of politics between the Renton director and the Boeing CFO (who only had a 360/30 for payroll up at boeing field, although they enlarge the machine room to install a 360/67 for me to play with when I wasn't doing other stuff). When I graduate, instead of staying at Boeing, I join IBM. I speculate that the 370/195 group, in part, wanted me to help with 195 multithreading because they would need to get the 360/65MP MVT (multiprocessor) software running on the 370/195.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
SMP, multiprocessor, loosely-coupled, and/or compare-and-swap posts
https://www.garlic.com/~lynn/subtopic.html#smp
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

recent posts mentioning acs/360 and sometimes 370/195 multi/hyper threading
https://www.garlic.com/~lynn/2023d.html#20 IBM 360/195
https://www.garlic.com/~lynn/2023c.html#68 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023b.html#84 Clone/OEM IBM systems
https://www.garlic.com/~lynn/2023b.html#20 IBM Technology
https://www.garlic.com/~lynn/2023b.html#6 z/VM 50th - part 7
https://www.garlic.com/~lynn/2023b.html#0 IBM 370
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#73 IBM 4341
https://www.garlic.com/~lynn/2023.html#72 IBM 4341
https://www.garlic.com/~lynn/2023.html#41 IBM 3081 TCM
https://www.garlic.com/~lynn/2023.html#36 IBM changes between 1968 and 1989

some recent 360 ascii posts
https://www.garlic.com/~lynn/2023.html#80 ASCII/TTY Terminal Support
https://www.garlic.com/~lynn/2023.html#25 IBM Punch Cards
https://www.garlic.com/~lynn/2022h.html#100 IBM 360
https://www.garlic.com/~lynn/2022h.html#65 Fred P. Brooks, 1931-2022
https://www.garlic.com/~lynn/2022h.html#63 Computer History, OS/360, Fred Brooks, MMM
https://www.garlic.com/~lynn/2022d.html#24 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022c.html#116 What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022c.html#56 ASCI White
https://www.garlic.com/~lynn/2022c.html#51 Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022b.html#91 Computer BUNCH
https://www.garlic.com/~lynn/2022b.html#58 Interdata Computers
https://www.garlic.com/~lynn/2022b.html#13 360 Performance
https://www.garlic.com/~lynn/2022.html#126 On the origin of the /text section/ for code

recent posts mentioning univ and BCS
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#75 IBM Mainframe
https://www.garlic.com/~lynn/2023.html#118 Google Tells Some Employees to Share Desks After Pushing Return-to-Office Plan
https://www.garlic.com/~lynn/2023.html#63 Boeing to deliver last 747, the plane that democratized flying
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2023.html#5 1403 printer
https://www.garlic.com/~lynn/2022h.html#99 IBM 360
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2022g.html#95 Iconic consoles of the IBM System/360 mainframes, 55 years old
https://www.garlic.com/~lynn/2022g.html#11 360 Powerup
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022d.html#110 Window Display Ground Floor Datacenters
https://www.garlic.com/~lynn/2022d.html#95 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#57 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022d.html#19 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022.html#48 Mainframe Career
https://www.garlic.com/~lynn/2022.html#12 Programming Skills

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360s

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360s
Date: 20 June, 2023
Blog: Facebook
my long-winded tome about Learson failing to block the bureaucrats, careerists, and MBAs from destroying the Watson legacy at IBM ... then the downward slide to the early 90s when IBM had one of the largest losses in the history of US companies and was being reorged into the 13 "baby blues" in preparation for breaking up the company (a new CEO was brought in, suspending the breakup).
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

my related comments in (this group's) 370/195 post ...
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195

I was introduced to John Boyd in the early 80s and would sponsor his briefings at IBM ... he had lots of stories ... one was that he was very vocal that the electronics across the trail wouldn't work ... so possibly as punishment, he is put in command of "spook base" (about the same time I'm at Boeing) ... one of his biographies says "spook base" was a $2.5B "windfall" for IBM (ten times the Boeing Renton datacenter). Some "spook base" refs:
https://en.wikipedia.org/wiki/Operation_Igloo_White
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html

note above mentions 2305 DASD ... so there would have been at least one 360/85 and/or possibly 370/195s (to attach 2305s) ... computers that aren't mentioned in the article.

John Boyd posts
https://www.garlic.com/~lynn/subboyd.html

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Mainframe Emulation

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Mainframe Emulation
Date: 21 June, 2023
Blog: Facebook
I had spent some time talking to funsoft (mainframe emulation)
https://web.archive.org/web/20240130182226/https://www.funsoft.com/
Sequent was their "preferred" (industrial) I86 platform ... and I did some consulting for Steve Chen (at the time Sequent CTO, before IBM bought them and shut them down).

currently there is Hercules
https://en.wikipedia.org/wiki/Hercules_(emulator)
https://curlie.org/Computers/Emulators/IBM_Mainframe/Hercules/

Note SLAC/CERN had done 370 subset processors (168e supposedly 3mips & 3081e supposedly 5-7mips), sufficient to run Fortran apps, placed along the accelerator ... doing initial data reduction from sensors
http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-3069.pdf
http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-3680.pdf
http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-3753.pdf

a couple past posts mentioning mainframe emulation, funsoft, hercules
https://www.garlic.com/~lynn/2011c.html#93 Irrational desire to author fundamental interfaces
https://www.garlic.com/~lynn/2004e.html#32 The attack of the killer mainframes

--
virtualization experience starting Jan1968, online at home since Mar1970

Eastern Airlines 370/195 System/One

From: Lynn Wheeler <lynn@garlic.com>
Subject: Eastern Airlines 370/195 System/One
Date: 21 June, 2023
Blog: Facebook
Eastern Airlines (SABRE) System/One ran on 370/195. Trivia: my wife served a short stint as chief architect for Amadeus (the Euro res system built off System/One) ... she didn't last long; she sided with the Europeans on x.25 versus SNA and the IBM communication group got her replaced ... the Europeans went with x.25 anyway and the communication group's replacement got replaced.

a couple past posts mentioning Eastern Airlines System/One and Amadeus
https://www.garlic.com/~lynn/2015d.html#84 ACP/TPF
https://www.garlic.com/~lynn/2012n.html#41 System/360--50 years--the future?
https://www.garlic.com/~lynn/2011.html#17 Looking for a real Fortran-66 compatible PC compiler (CP/M or DOS or Windows, doesn't matter)

--
virtualization experience starting Jan1968, online at home since Mar1970

"The Big One" (IBM 3033)

From: Lynn Wheeler <lynn@garlic.com>
Subject: "The Big One" (IBM 3033)
Date: 22 June, 2023
Blog: Linkedin
The "Future System" effort in the early 70s was completely different from 370 and was going to completely replace it. Internal politics during FS was killng off 370 efforts (the lack of new 370 during FS is credited with giving 370 clone makers their market foothold, also lots of stories about sales&marketing having to repeatedly resort to FUD). When FS implodes there is mad rush to get stuff back into the 370 product pipelines, including kicking off quick&dirty 3033&3081 efforts in parallel. A lot more detail
http://www.jfsowa.com/computer/memo125.htm

I had gotten sucked into working on a 16-processor 370 and we con'ed the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips). Everybody thought it was really great until somebody told the head of POK that it could be decades before the POK favorite son operating system (MVS) had effective 16-way support (POK doesn't ship a 16-way machine until after the turn of the century). Some of us were then instructed to never visit POK again (and the 3033 processor engineers were ordered to stop being distracted). Once the 3033 was out the door, the processor engineers start on trout (3090) ... overlapped with the ongoing 3081.

Note: the 165 completed a 370 instruction on avg every 2.1 processor cycles. The microcode was improved for the 168-1 to an avg of 1.6 processor cycles/370 instruction (with main memory that was nearly five times faster). For the 168-3, the processor cache size was doubled. For the 3033, besides 20% faster chips, the microcode was further optimized to complete one 370 instruction per processor cycle ... getting the 3033 up to approx 1.5 times the 168-3. For the 303x "channel director", they took a 158 engine w/o the 370 microcode, with just the integrated channel microcode. A 3031 was two 158 engines, one with just the 370 microcode and the 2nd with just the integrated channel microcode. A 3032 was a 168-3 reworked to use the channel director (158 engine with integrated channel microcode) for external channels. A 3033 was 168 logic remapped to 20% faster chips, with 158 engines (integrated channel microcode) as channel directors.
https://www.ibm.com/ibm/history/exhibits/3033/3033_album.html
https://www.ibm.com/ibm/history/exhibits/3033/3033_CH01.html

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, multiprocessor, tightly-coupled, and/or compare-and-swap posts
https://www.garlic.com/~lynn/subtopic.html#smp

some recent posts that mention 3033 and CSA (common segment/system area)
https://www.garlic.com/~lynn/2023d.html#28 Ingenious librarians
https://www.garlic.com/~lynn/2023d.html#22 IBM 360/195
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#0 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2022h.html#27 370 virtual memory
https://www.garlic.com/~lynn/2022f.html#122 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022d.html#55 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022c.html#69 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022c.html#49 IBM 3033 Personal Computing
https://www.garlic.com/~lynn/2022b.html#19 Channel I/O
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory

--
virtualization experience starting Jan1968, online at home since Mar1970

Online Forums and Information

From: Lynn Wheeler <lynn@garlic.com>
Subject: Online Forums and Information
Date: 22 June, 2023
Blog: Facebook
Trivia: I would attend BAYBUNCH customer user group meetings held at SLAC and also call on various customers like TYMSHARE
https://en.wikipedia.org/wiki/Tymshare
In Aug1976, TYMSHARE made their CMS-based online computer conferencing free to the (user group) SHARE
https://en.wikipedia.org/wiki/SHARE_(computing)
as VMSHARE ... archives here
http://vm.marist.edu/~vmshare

I cut a deal with TYMSHARE to get a monthly tape dump of all VMSHARE files for putting up on the internal network and internal systems (including the world-wide, online sales&marketing support HONE systems). The biggest problem I had was with IBM lawyers, who were concerned that internal IBM employees could be contaminated by direct exposure to customers (and/or that what they were being told by IBM executives wasn't what customers were actually saying).

Ann Hardy at Computer History Museum
https://www.computerhistory.org/collections/catalog/102717167

Ann rose up to become Vice President of the Integrated Systems Division at Tymshare, from 1976 to 1984, which did online airline reservations, home banking, and other applications. When Tymshare was acquired by McDonnell-Douglas in 1984, Ann's position as a female VP became untenable, and was eased out of the company by being encouraged to spin out Gnosis, a secure, capabilities-based operating system developed at Tymshare. Ann founded Key Logic, with funding from Gene Amdahl, which produced KeyKOS, based on Gnosis, for IBM and Amdahl mainframes. After closing Key Logic, Ann became a consultant, leading to her cofounding Agorics with members of Ted Nelson's Xanadu project.

... snip ...

trivia: I was brought in to review GNOSIS for its spin-off to Key Logic. Also, I was blamed for online computer conferencing on the IBM internal network in the late 70s and early 80s ... folklore is that when the corporate executive committee was told, 5of6 wanted to fire me. Then there were official software and approved, moderated, sanctioned forums (the software supported both mailing list and server access modes).

trivia2: The corporate sponsored univ network, BITNET (sort of a subset of the internal software)
https://en.wikipedia.org/wiki/BITNET

old email ref from a former co-worker at the science center (on a year's sabbatical from France)
https://www.garlic.com/~lynn/2001h.html#email840320
got "listserv" in the mid-80s
https://en.wikipedia.org/wiki/LISTSERV
https://www.lsoft.com/products/listserv-history.asp

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3278

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3278
Date: 23 June, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#27 IBM 3278
https://www.garlic.com/~lynn/2023d.html#29 IBM 3278
https://www.garlic.com/~lynn/2023d.html#31 IBM 3278

3270 related: In 1980, STL (originally Coyote Lab, following the convention of naming for the closest post office; quickly renamed Santa Teresa Lab, since renamed Silicon Valley Lab) was bursting at the seams and they were moving 300 people from the IMS group to an offsite bldg, with dataprocessing back to the STL datacenter. They had tried "remote" 3270 ... but found the human factors totally unacceptable. I get con'ed into doing channel-extender support, so they can place channel-connected 3270 controllers at the offsite bldg, with no perceived difference in 3270 human factors between the offsite bldg and STL. From the law of unintended consequences, system throughput went up 10-15%. The 3270 channel-attached controllers were (relatively) really slow, with high channel busy, and had been spread across all the available channels (shared with DASD). Moving all the 3270 channel-attached controllers to the offsite bldg put them behind channel-extender boxes, which were really fast with much lower channel busy for the same 3270 traffic (there was some consideration of moving all 3270 controllers behind channel-extender boxes just to achieve the increased system throughput).

The hardware vendor then tried to get IBM to release my support ... however, there was a group in POK playing with some serial stuff who were afraid that if it was in the market, it would make it harder to get their stuff approved ... and they got the release vetoed.

Note: in 1988, the IBM branch office asks me to work with (national lab) LLNL, helping them get some serial stuff they are playing with standardized ... which quickly becomes the Fibre Channel Standard ("FCS", including some stuff I had done in 1980; initially 1gbit, full-duplex, 2gbit aggregate, 200mbytes/sec). Then in 1990, POK gets their stuff released as ESCON with ES/9000 (when it was already obsolete, 17mbytes/sec). Then some POK people get involved with FCS and define a heavy-weight protocol that radically reduces the native throughput, which is eventually released as FICON. The most recent publicly published "peak I/O" benchmark I've been able to find is a z196 getting 2M IOPS using 104 FICON (running over 104 FCS). About the same time, there was an FCS announced for the E5-2600 blade (commonly found in cloud megadatacenters) that claimed over a million IOPS (two such FCS getting higher throughput than 104 FICON). trivia: using the industry benchmark (number of iterations compared to 370/158, assumed to be 1MIPS), a max-configured z196 was 50BIPS and an E5-2600 was 500BIPS (ten times more)
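
Rough per-link arithmetic, using only the figures above:

  104 FICON on z196:   2,000,000 IOPS / 104  ~=  19,200 IOPS per FICON (per underlying FCS)
  native FCS (E5-2600 blade):                ~=  1,000,000 IOPS per FCS
  i.e. two native FCS (~2M IOPS) match the aggregate of all 104 FICON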

channel extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Santa Teresa Lab

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Santa Teresa Lab
Date: 23 June, 2023
Blog: Facebook
posts included duplicate of
https://www.garlic.com/~lynn/2023d.html#38 IBM 3278

trivia: STL was originally going to be called Coyote Lab (after the convention of naming for the closest post office) ... then a SanFran professional ladies group staged a demonstration on the steps of the capitol (in Wash DC) and the name was quickly changed to STL (for the closest cross-street).

A little ... 1983, the branch office asks me to help with the Berkeley 10M telescope ... they were testing some stuff (including the switch from film to CCD) up at Lick Observatory ... and I had visits up there. This was about the time Lick was lobbying for the San Jose street lights to switch from mercury to sodium ... wanting low-pressure rather than high-pressure sodium (which had much more light pollution). They eventually get an $80M grant from the Keck Foundation ... and it becomes the Keck 10M in Hawaii.

The other place was the Los Gatos lab (many considered it the most scenic in IBM) ... where they gave me part of a wing with offices and labs. One of the projects I had was HSDT ... T1 and faster computer links ... both terrestrial and satellite ... including custom-designed TDMA earth stations ... a 3-node Ku-band setup with two 4.5M dishes (Los Gatos lab and Yorktown) and a 7M dish (Austin).

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

posts mentioning Berkeley/Keck 10M & Lick
https://www.garlic.com/~lynn/2023b.html#9 Lick and Keck Observatories
https://www.garlic.com/~lynn/2022e.html#104 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022.html#67 HSDT, EARN, BITNET, Internet
https://www.garlic.com/~lynn/2021k.html#56 Lick Observatory
https://www.garlic.com/~lynn/2021g.html#61 IBM HSDT & HA/CMP
https://www.garlic.com/~lynn/2021c.html#60 IBM CEO
https://www.garlic.com/~lynn/2021c.html#25 Too much for one lifetime? :-)
https://www.garlic.com/~lynn/2021b.html#25 IBM Recruiting
https://www.garlic.com/~lynn/2019e.html#88 5 milestones that created the internet, 50 years after the first network message
https://www.garlic.com/~lynn/2019c.html#50 Hawaii governor gives go ahead to build giant telescope on sacred Native volcano
https://www.garlic.com/~lynn/2019.html#47 Astronomy topic drift
https://www.garlic.com/~lynn/2018f.html#71 Is LINUX the inheritor of the Earth?
https://www.garlic.com/~lynn/2018f.html#22 A Tea Party Movement to Overhaul the Constitution Is Quietly Gaining
https://www.garlic.com/~lynn/2018d.html#76 George Lucas reveals his plan for Star Wars 7 through 9--and it was awful
https://www.garlic.com/~lynn/2018c.html#89 Earth's atmosphere just crossed another troubling climate change threshold
https://www.garlic.com/~lynn/2017g.html#51 Stopping the Internet of noise
https://www.garlic.com/~lynn/2016f.html#71 Under Hawaii's Starriest Skies, a Fight Over Sacred Ground
https://www.garlic.com/~lynn/2015g.html#97 power supplies
https://www.garlic.com/~lynn/2015.html#19 Spaceshot: 3,200-megapixel camera for powerful cosmos telescope moves forward
https://www.garlic.com/~lynn/2014h.html#56 Revamped PDP-11 in Brooklyn
https://www.garlic.com/~lynn/2014g.html#50 Revamped PDP-11 in Honolulu or maybe Santa Fe
https://www.garlic.com/~lynn/2014.html#76 Royal Pardon For Turing
https://www.garlic.com/~lynn/2014.html#8 We're About to Lose Net Neutrality -- And the Internet as We Know It
https://www.garlic.com/~lynn/2012o.html#55 360/20, was 1132 printer history
https://www.garlic.com/~lynn/2012k.html#86 OT: Physics question and Star Trek
https://www.garlic.com/~lynn/2010i.html#24 Program Work Method Question
https://www.garlic.com/~lynn/2009o.html#55 TV Big Bang 10/12/09
https://www.garlic.com/~lynn/2008f.html#80 A Super-Efficient Light Bulb
https://www.garlic.com/~lynn/2005l.html#9 Jack Kilby dead
https://www.garlic.com/~lynn/2004h.html#8 CCD technology

--
virtualization experience starting Jan1968, online at home since Mar1970

Oldest Internet email addresses

From: Lynn Wheeler <lynn@garlic.com>
Subject: Oldest Internet email addresses
Date: 23 June, 2023
Blog: Facebook
we got csnet
https://en.wikipedia.org/wiki/CSNET
gateway at SJR in fall of 1982 ... old email
https://www.garlic.com/~lynn/98.html#email821022
https://www.garlic.com/~lynn/2002p.html#email821122
https://www.garlic.com/~lynn/2008s.html#email821204
CSNET mentions trouble with conversion to internetworking & tcp/ip
https://www.garlic.com/~lynn/2002p.html#email830109
https://www.garlic.com/~lynn/2000e.html#email830202

my email CSNET: Wheeler@IBM-SJ ARPANET: Wheeler.IBM-SJ@UDel-Relay

old ref "The 100 Oldest Currently Registered Domain Names" from 25Jan2003, gone 404, but still at wayback machine
https://web.archive.org/web/20080102082530/http://www.whoisd.com/oldestcom.php
3/19/1986, IBM.COM (11th on list)

then right after interop-88 got 9-net ... old email
https://www.garlic.com/~lynn/2006j.html#email881216

internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
nsfnet posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
interop-88 posts
https://www.garlic.com/~lynn/subnetwork.html#interop88

--
virtualization experience starting Jan1968, online at home since Mar1970

The Architect of the Radical Right

From: Lynn Wheeler <lynn@garlic.com>
Subject: The Architect of the Radical Right
Date: 25 June, 2023
Blog: Facebook
The Architect of the Radical Right. How the Nobel Prize-winning economist James M. Buchanan shaped today's antigovernment politics
https://www.theatlantic.com/magazine/archive/2017/07/the-architect-of-the-radical-right/528672/

Democracy in Chains: The Deep History of the Radical Right's Stealth Plan for America
https://www.amazon.com/Democracy-Chains-History-Radical-Stealth-ebook/dp/B01EH1EL7A/
pgxxvii/loc293-97:

it was training operatives to staff the far-flung and purportedly separate, yet intricately connected, institutions funded by the Koch brothers and their now large network of fellow wealthy donors. These included the Cato Institute, the Heritage Foundation, Citizens for a Sound Economy, Americans for Prosperity, FreedomWorks, the Club for Growth, the State Policy Network, the Competitive Enterprise Institute, the Tax Foundation, the Reason Foundation, the Leadership Institute, and more, to say nothing of the Charles Koch Foundation and Koch Industries itself.


pgxxviii/loc317-22:

I was able to do so because Koch's team had since moved on to a vast new command-and-control facility at George Mason called the Mercatus Center, leaving Buchanan House largely untended. Future-oriented, Koch's men (and they are, overwhelmingly, men) gave no thought to the fate of the historical trail they left unguarded. And thus, a movement that prided itself, even congratulated itself, on its ability to carry out a revolution below the radar of prying eyes (especially those of reporters) had failed to lock one crucial door: the front door to a house that let an academic archive rat like me, operating on a vague hunch, into the mind of the man who started it all.


pgxxxiv/loc420-23:

Koch never lied to himself about what he was doing. While some others in the movement called themselves conservatives, he knew exactly how radical his cause was. Informed early on by one of his grantees that the playbook on revolutionary organization had been written by Vladimir Lenin, Koch dutifully cultivated a trusted "cadre" of high-level operatives, just as Lenin had done, to build a movement that refused compromise as it devised savvy maneuvers to alter the political math in its favor.

... snip ...

Meet the Economist Behind the One Percent's Stealth Takeover of America. Nobel laureate James Buchanan is the intellectual linchpin of the Koch-funded attack on democratic institutions, argues Duke historian Nancy MacLean
https://www.ineteconomics.org/perspectives/blog/meet-the-economist-behind-the-one-percents-stealth-takeover-of-america

With Koch's money and enthusiasm, Buchanan's academic school evolved into something much bigger. By the 1990s, Koch realized that Buchanan's ideas -- transmitted through stealth and deliberate deception, as MacLean amply documents -- could help take government down through incremental assaults that the media would hardly notice. The tycoon knew that the project was extremely radical, even a "revolution" in governance, but he talked like a conservative to make his plans sound more palatable.

... snip ...

Kochland talks about Koch taking over the republican party and congress for 2011, enabled by the Citizens United decision and enormous amounts of money ... running carefully selected candidates in primaries against incumbents, telling their candidate to stay home and not even having to campaign, winning with an enormous number of paid-for attack ads ... but then battling with Trump and the religious right after 2017 (the Koch libertarian stealth take-over of the conservative Republican party)
https://www.amazon.com/Kochland-History-Industries-Corporate-America-ebook/dp/B07P5HCQ7G/
pg113/loc1898-1903:

The Libertarian Party sought to abolish a vast set of government agencies and programs, including Medicare, Medicaid, Social Security (which would be made voluntary), the Department of Transportation (and "all government agencies concerned with transportation," including the Federal Aviation Administration, which oversees airplane safety), the Environmental Protection Agency, the Department of Energy, the Food and Drug Administration, and the Consumer Product Safety Commission. And this is just a partial list. The party also sought to privatize all roads and highways, to privatize all schools, to privatize all mail delivery. It sought to abolish personal and corporate income taxes and, eventually, the "repeal of all taxation."

... snip ...

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

some specific posts mentioning "Democracy In Chains" and/or Kochland
https://www.garlic.com/~lynn/2022g.html#37 GOP unveils 'Commitment to America'
https://www.garlic.com/~lynn/2022c.html#58 Rags-to-Riches Stories Are Actually Kind of Disturbing
https://www.garlic.com/~lynn/2022c.html#35 40 Years of the Reagan Revolution's Libertarian Experiment Have Brought Us Crisis & Chaos
https://www.garlic.com/~lynn/2021g.html#40 Why do people hate universal health care? It turns out -- they don't
https://www.garlic.com/~lynn/2021.html#27 We must stop calling Trump's enablers 'conservative.' They are the radical right
https://www.garlic.com/~lynn/2021.html#20 Trickle Down Economics Started it All
https://www.garlic.com/~lynn/2020.html#14 Book on monopoly (IBM)
https://www.garlic.com/~lynn/2020.html#5 Book: Kochland : the secret history of Koch Industries and corporate power in America
https://www.garlic.com/~lynn/2020.html#4 Bots Are Destroying Political Discourse As We Know It
https://www.garlic.com/~lynn/2020.html#3 Meet the Economist Behind the One Percent's Stealth Takeover of America

--
virtualization experience starting Jan1968, online at home since Mar1970

The China Debt Trap Lie that Won't Die

From: Lynn Wheeler <lynn@garlic.com>
Subject: The China Debt Trap Lie that Won't Die
Date: 26 June, 2023
Blog: Facebook
The China Debt Trap Lie that Won't Die
https://www.nakedcapitalism.com/2023/06/the-china-debt-trap-lie-that-wont-die.html

some posts mentioning "China Debt Trap"
https://www.garlic.com/~lynn/2022g.html#50 US Debt Vultures Prey on Countries in Economic Distress
https://www.garlic.com/~lynn/2022e.html#76 Washington Doubles Down on Hyper-Hypocrisy After Accusing China of Using Debt to "Trap" Latin American Countries
https://www.garlic.com/~lynn/2021k.html#71 MI6 boss warns of China 'debt traps and data traps'
https://www.garlic.com/~lynn/2019.html#13 China's African debt-trap ... and US Version

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
fascism posts
https://www.garlic.com/~lynn/submisc.html#fascism

other recent posts that mention "US Debt Trap" (& Economic Hit Man)
https://www.garlic.com/~lynn/2023b.html#95 The Big Con
https://www.garlic.com/~lynn/2023b.html#90 Savage Capitalism. From Climate Change to Bank Failures to War
https://www.garlic.com/~lynn/2023b.html#43 Silicon Valley Bank collapses after failing to raise capital
https://www.garlic.com/~lynn/2023.html#17 Gangsters of Capitalism
https://www.garlic.com/~lynn/2022g.html#19 no, Socialism and Fascism Are Not the Same
https://www.garlic.com/~lynn/2022g.html#12 The Destiny of Civilization: Michael Hudson on Finance Capitalism
https://www.garlic.com/~lynn/2022f.html#76 Why the Soviet computer failed
https://www.garlic.com/~lynn/2022f.html#65 Gangsters of Capitalism
https://www.garlic.com/~lynn/2022e.html#41 Wall Street's Plot to Seize the White House
https://www.garlic.com/~lynn/2022c.html#88 How the Ukraine War - and COVID-19 - is Affecting Inflation and Supply Chains
https://www.garlic.com/~lynn/2022b.html#104 Why Nixon's Prediction About Putin and Ukraine Matters
https://www.garlic.com/~lynn/2021k.html#21 Obama's Failure to Adequately Respond to the 2008 Crisis Still Haunts American Politics
https://www.garlic.com/~lynn/2021i.html#97 The End of World Bank's "Doing Business Report": A Landmark Victory for People & Planet
https://www.garlic.com/~lynn/2021i.html#33 Afghanistan's Corruption Was Made in America
https://www.garlic.com/~lynn/2021h.html#29 More than a Decade After the Volcker Rule Purported to Outlaw It, JPMorgan Chase Still Owns a Hedge Fund
https://www.garlic.com/~lynn/2021f.html#34 Obama Was Always in Wall Street's Corner
https://www.garlic.com/~lynn/2021f.html#26 Why We Need to Democratize Wealth: the U.S. Capitalist Model Breeds Selfishness and Resentment
https://www.garlic.com/~lynn/2021e.html#97 How capitalism is reshaping cities
https://www.garlic.com/~lynn/2021e.html#71 Bill Black: The Best Way to Rob a Bank Is to Own One (Part 1/9)
https://www.garlic.com/~lynn/2021d.html#75 The "Innocence" of Early Capitalism is Another Fantastical Myth
https://www.garlic.com/~lynn/2019e.html#106 OT, "new" Heinlein book
https://www.garlic.com/~lynn/2019e.html#92 OT, "new" Heinlein book
https://www.garlic.com/~lynn/2019e.html#38 World Bank, Dictatorship and the Amazon
https://www.garlic.com/~lynn/2019e.html#18 Before the First Shots Are Fired
https://www.garlic.com/~lynn/2019d.html#79 Bretton Woods Institutions: Enforcers, Not Saviours?
https://www.garlic.com/~lynn/2019d.html#54 Global Warming and U.S. National Security Diplomacy
https://www.garlic.com/~lynn/2019d.html#52 The global economy is broken, it must work for people, not vice versa
https://www.garlic.com/~lynn/2019c.html#40 When Dead Companies Don't Die - Welcome To The Fat, Slow World
https://www.garlic.com/~lynn/2019c.html#36 Is America A Christian Nation?
https://www.garlic.com/~lynn/2019c.html#17 Family of Secrets
https://www.garlic.com/~lynn/2019.html#85 LUsers
https://www.garlic.com/~lynn/2019.html#45 Jeffrey Skilling, Former Enron Chief, Released After 12 Years in Prison
https://www.garlic.com/~lynn/2019.html#43 Billionaire warlords: Why the future is medieval
https://www.garlic.com/~lynn/2019.html#42 Army Special Operations Forces Unconventional Warfare
https://www.garlic.com/~lynn/2019.html#41 Family of Secrets

--
virtualization experience starting Jan1968, online at home since Mar1970

AI Scale-up

From: Lynn Wheeler <lynn@garlic.com>
Subject: AI Scale-up
Date: 27 June, 2023
Blog: Facebook
trivia: my wife was in the gburg JES group and one of the catchers for ASP/JES3 from the west coast. Then she was con'ed into going to POK to be responsible for loosely-coupled architecture (mainframe for cluster), authoring the Peer-Coupled Shared Data architecture. She didn't remain long because of 1) sporadic battles with the communication group trying to force her into using SNA/VTAM for loosely-coupled operation and 2) little uptake (except for IMS hot-standby, until SYSPLEX and parallel SYSPLEX). She has a story about asking Vern Watts who he was going to ask for permission to do IMS hot-standby; he replies nobody, he would just do it and tell them when he was all done. One of his problems was that while IMS hot-standby fell over in minutes ... it could take VTAM well over an hour to re-establish the sessions for a large online configuration.

peer-coupled shared data architecture posts
https://www.garlic.com/~lynn/submain.html#shareddata

The last product we did at IBM was HA/CMP ... it started out as HA/6000 for the NYTimes to migrate their newspaper system (ATEX) from VAXCluster to RS/6000. I rename it HA/CMP (High Availability Cluster Multi-Processing) when I start doing technical/scientific cluster scale-up with the national labs and commercial cluster scale-up with RDBMS vendors (that had VAXCluster in the same source base with UNIX ... I do an API with VAXCluster semantics to ease the port). I'm then asked to write a section for the corporate continuous availability strategy document ... but it gets pulled when both Rochester (AS/400) and POK (mainframe) complain they couldn't meet the requirements. Shortly after a JAN1992 meeting with Oracle CEO Ellison, cluster scale-up is transferred for announce as an IBM supercomputer (technical/scientific *ONLY*) and we are told we couldn't work on anything with more than four processors (we leave IBM a few months later).

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
Continuous availability posts
https://www.garlic.com/~lynn/submain.html#available

Prior to that, in 1988, the branch office asked if I could help (national lab) LLNL standardize some serial stuff they were playing with, which quickly becomes the Fibre Channel Standard (including some stuff I had done in 1980) ... FCS, initially 1gbit full-duplex, 2gbit aggregate (200mbytes/sec). Then in 1990, POK announces some of their serial stuff as ESCON with ES/9000 (when it is already obsolete, 17mbytes/sec; trivia: in 1980, POK had gotten my stuff vetoed for release). Then POK starts playing with FCS and defines a heavy-weight protocol that drastically reduces the throughput, which is eventually announced as FICON. The most recent published benchmark I can find is the z196 "PEAK I/O" that got 2M IOPS with 104 FICON (running over 104 FCS). About the same time, an FCS was announced for the E5-2600 blade claiming over a million IOPS (two such FCS getting higher throughput than 104 FICON running over 104 FCS).

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

Note: the industry standard processor benchmark (number of iterations compared to 370/158, assumed to be 1MIPS) has 50 BIPS for a max-configured z196 compared to 500 BIPS (ten times) for an E5-2600 blade. The E5-2600 blades were standard for large cloud operations that would have a score or more of megadatacenters around the world; each megadatacenter would have half a million or more E5-2600 blades. This was before IBM sold off its server business and had a base list price for the E5-2600 blade of $1815 (E5-2600: $3.60/BIPS compared to a $30M max-configured z196 at $600,000/BIPS). Other trivia: for a couple of decades, large cloud operations have claimed they assemble their own blades at 1/3rd the cost of brand name blades ... or about $1.20/BIPS ... say $605/blade or slightly over $300M for a 500,000-blade megadatacenter with 250,000 TIPS (250M BIPS), compared to $30M for a max-configured z196 @50BIPS. More than ten years later, current blades are significantly more powerful and can be assembled with even more powerful GPUs for the AI market.
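
Laying out that price/performance arithmetic (rough, using only the figures above):

  max-configured z196:          $30,000,000 / 50 BIPS   ~= $600,000 per BIPS
  E5-2600 blade (IBM list):     $1,815 / 500 BIPS       ~= $3.60 per BIPS
  self-assembled (1/3rd list):  ~$605 / 500 BIPS        ~= $1.20 per BIPS
  500,000-blade megadatacenter: 500,000 x 500 BIPS = 250M BIPS (250,000 TIPS),
                                500,000 x ~$605    ~= $300M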

2013 blade article
https://www.bladesmadesimple.com/2013/01/siphotonics/
2023 IBM Introduces 'Vela' Cloud AI Supercomputer Powered by Intel, Nvidia
https://www.hpcwire.com/2023/02/08/ibm-introduces-vela-cloud-ai-supercomputer-powered-by-intel-nvidia/
a couple previous posts
https://www.garlic.com/~lynn/2023.html#100 IBM Introduces 'Vela' Cloud AI Supercomputer Powered by Intel, Nvidia
https://www.garlic.com/~lynn/2023.html#105 IBM Introduces 'Vela' Cloud AI Supercomputer Powered by Intel, Nvidia

megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

--
virtualization experience starting Jan1968, online at home since Mar1970

AI Scale-up

From: Lynn Wheeler <lynn@garlic.com>
Subject: AI Scale-up
Date: 29 June, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#43 AI Scale-up

more about IBM's AI supercomputer in the cloud: "Why we built an AI supercomputer in the cloud. Introducing Vela, IBM's first AI-optimized, cloud-native supercomputer."
https://research.ibm.com/blog/AI-supercomputer-Vela-GPU-cluster

This led us to build IBM's first AI-optimized, cloud-native supercomputer, Vela. It has been online since May of 2022, housed within IBM Cloud, and is currently just for use by the IBM Research community. The choices we've made with this design give us the flexibility to scale up at will and readily deploy similar infrastructure into any IBM Cloud data center across the globe. Vela is now our go-to environment for IBM Researchers creating our most advanced AI capabilities, including our work on foundation models and is where we collaborate with partners to train models of many kinds.

... aka using Intel & Nvidia

megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

--
virtualization experience starting Jan1968, online at home since Mar1970

wallpaper updater

From: Lynn Wheeler <lynn@garlic.com>
Subject: wallpaper updater
Date: 30 June, 2023
Blog: Facebook
I've had a shell script to update the screen window every couple of minutes from a large file containing a randomized list of jpg files ... and it was something of a hack (restart worked from a count of reads / window updates).

Today it took a little while to get the displacement from the open file (the list of jpg files) after each read/update ... and use it for restart (20 updates/hr, 200 or so per day, power-off/power-on), resetting the open-file displacement on restart (all from the shell script).

--
virtualization experience starting Jan1968, online at home since Mar1970

wallpaper updater

From: Lynn Wheeler <lynn@garlic.com>
Subject: wallpaper updater
Date: 30 June, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#45 wallpaper updater

... just a 20-line script for a personal PC ... before, it was checking the count of lines read (from a list of 33,000 jpg files) and restart was removing the previously-read lines from the file. The loop was a simple "while read". This version just re-opens the file and updates the displacement (until it cycles through the whole file, a couple of months).
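
A minimal sketch of that approach (not the actual script; the list/position filenames and the wallpaper-setting command are examples only, using feh here, and it assumes single-byte pathnames so character count equals byte count):

  #!/bin/sh
  # cycle through a randomized list of jpg files, saving the byte displacement
  # after each update so a restart (power-off/power-on) resumes where it left off
  LIST=$HOME/wallpaper.list    # example: randomized list of jpg pathnames
  POS=$HOME/wallpaper.pos      # example: saved byte displacement into $LIST

  OFFSET=0
  [ -f "$POS" ] && OFFSET=$(cat "$POS")

  while :; do
      # re-open the list at the saved displacement, one pathname per line
      tail -c +$((OFFSET + 1)) "$LIST" | while read -r JPG; do
          feh --bg-fill "$JPG"                # example wallpaper-setting command
          OFFSET=$((OFFSET + ${#JPG} + 1))    # advance past the line plus its newline
          echo "$OFFSET" > "$POS"             # record displacement for restart
          sleep 180                           # update every couple of minutes
      done
      OFFSET=0; echo 0 > "$POS"               # cycled through the whole file; start over
  done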

I haven't done any real webserver availability work since the early 90s, when I was brought in as a consultant to a small client/server startup (mosaic, then renamed netscape) that wanted to do payment transactions on its servers (they had also invented this technology they called "SSL" they wanted to use). I had to do the payment network gateways and a protocol between the gateways and webservers (over the internet). Lots of physically redundant gateways at multiple locations and lots of internet paths to major centers. Payment network trouble desks required 5min 1st-level problem determination (leased lines, some dialed lines). The first webserver problem was diagnosed for 3hrs and then closed as NTF. I had to do a lot of software and documentation to meet the payment network trouble desk requirement. It is sometimes now called "electronic commerce".

payment gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway

This was after the "scale-up" work for our HA/CMP product was transferred for announce as IBM supercomputer and we were told we couldn't work on anything with more than four processors (we leave IBM a few months later).

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

Turns out two of the Oracle people we had worked with on HA/CMP scale-up ... were at the startup responsible for something called "commerce server".

Postel
https://en.wikipedia.org/wiki/Jon_Postel
also sponsored my talk at ISI/USC on "why the internet isn't business critical dataprocessing" that was based on the compensating software&instructions I had to do for "electronic commerce" at MOSAIC.

availability posts
https://www.garlic.com/~lynn/submain.html#availability

--
virtualization experience starting Jan1968, online at home since Mar1970

AI Scale-up

From: Lynn Wheeler <lynn@garlic.com>
Subject: AI Scale-up
Date: 29 June, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#43 AI Scale-up
https://www.garlic.com/~lynn/2023d.html#44 AI Scale-up

earlier numbers, industry standard benchmark, number of iterations compared to 370/158, assumed to be 1MIPS ... then just extrapolate from previous machines


z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
EC12, 101 processors, 75BIPS (743MIPS/proc), Aug2012
z13, 140 processors, 100BIPS (710MIPS/proc), Jan2015
z14, 170 processors, 150BIPS (862MIPS/proc), Aug2017
z15, 190 processors, 190BIPS* (1000MIPS/proc), Sep2019

* pubs say z15 1.25 times z14 (1.25*150BIPS or 190BIPS)

z16, 200 processors, 222BIPS* (1111MIPS/proc), Sep2022

* pubs say z16 1.17 times z15 (1.17*190BIPS or 222BIPS)



IBM z16tm puts innovation to work while unlocking the potential of your hybrid cloud transformation
https://www.ibm.com/common/ssi/ShowDoc.wss?docURL=/common/ssi/rep_ca/1/897/ENUS122-001/index.html

The largest IBM z16 is expected to provide approximately 17% more total system capacity as compared to the largest IBM z15 with some variation based on workload and configuration.

... snip ...

aka z16 BIPS is still less than half that of the 2010 E5-2600 blade, which had an IBM base list price of $1815.

some past posts mentioning z196 benchmarks & comparisons
https://www.garlic.com/~lynn/2023d.html#38 IBM 3278
https://www.garlic.com/~lynn/2022h.html#113 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022h.html#112 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022g.html#71 Mainframe and/or Cloud
https://www.garlic.com/~lynn/2022f.html#98 Mainframe Cloud
https://www.garlic.com/~lynn/2022f.html#12 What is IBM SNA?
https://www.garlic.com/~lynn/2022f.html#10 9 Mainframe Statistics That May Surprise You
https://www.garlic.com/~lynn/2022e.html#71 FedEx to Stop Using Mainframes, Close All Data Centers By 2024
https://www.garlic.com/~lynn/2022d.html#60 VM/370 Turns 50 2Aug2022
https://www.garlic.com/~lynn/2022d.html#6 Computer Server Market
https://www.garlic.com/~lynn/2022c.html#111 Financial longevity that redhat gives IBM
https://www.garlic.com/~lynn/2022c.html#67 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022c.html#54 IBM Z16 Mainframe
https://www.garlic.com/~lynn/2022c.html#19 Telum & z16
https://www.garlic.com/~lynn/2022c.html#12 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#77 Channel I/O
https://www.garlic.com/~lynn/2022b.html#63 Mainframes
https://www.garlic.com/~lynn/2022b.html#57 Fujitsu confirms end date for mainframe and Unix systems
https://www.garlic.com/~lynn/2022b.html#45 Mainframe MIPS
https://www.garlic.com/~lynn/2022.html#96 370/195
https://www.garlic.com/~lynn/2022.html#84 Mainframe Benchmark
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021j.html#56 IBM and Cloud Computing
https://www.garlic.com/~lynn/2021i.html#92 How IBM lost the cloud
https://www.garlic.com/~lynn/2021i.html#2 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021h.html#44 OoO S/360 descendants
https://www.garlic.com/~lynn/2021f.html#41 IBM Mainframe
https://www.garlic.com/~lynn/2021f.html#18 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2021e.html#68 Amdahl
https://www.garlic.com/~lynn/2021d.html#55 Cloud Computing
https://www.garlic.com/~lynn/2021b.html#4 Killer Micros
https://www.garlic.com/~lynn/2021b.html#0 Will The Cloud Take Down The Mainframe?

--
virtualization experience starting Jan1968, online at home since Mar1970

UNMITIGATED RISK

From: Lynn Wheeler <lynn@garlic.com>
Subject: UNMITIGATED RISK
Date: 02 July, 2023
Blog: Facebook
UNMITIGATED RISK
https://unmitigatedrisk.com/?p=717

At the time of Sarbanes-Oxley, the rhetoric in congress was that Sarbanes-Oxley would prevent future ENRONs and guarantee that executives and auditors did jailtime ... however, it required the SEC to do something. Possibly because GAO didn't believe the SEC was doing anything, GAO started doing reports of fraudulent financial filings, even showing they increased after SOX went into effect (and nobody doing jailtime). The joke was that SOX was done as a gift to the audit industry, because congress felt badly that one of the "big five" went out of business. GAO references:
http://www.gao.gov/products/GAO-03-138
http://www.gao.gov/products/GAO-06-678
http://www.gao.gov/products/GAO-06-1053R

On July 24, 2006, we issued a report to Congress entitled, Financial Restatements: Update of Public Company Trends, Market Impacts, and Regulatory Enforcement Activities. That report included a listing of 1,390 financial restatement announcements that we identified as having been made because of financial reporting fraud and/or accounting errors between July 1, 2002, and September 30, 2005. As part of that work, Congress asked that we provide a limited update of that database for the period October 1, 2005, through June 30, 2006.

... snip ...

sarbanes-oxley posts
https://www.garlic.com/~lynn/submisc.html#sarbanes-oxley
financial reporting fraud posts
https://www.garlic.com/~lynn/submisc.html#financial.reporting.fraud
enron posts
https://www.garlic.com/~lynn/submisc.html#enron
Risk, Fraud, Exploits, Threats, Vulnerabilities posts
https://www.garlic.com/~lynn/subintegrity.html#fraud

--
virtualization experience starting Jan1968, online at home since Mar1970

Computer Speed Gains Erased By Modern Software

From: Lynn Wheeler <lynn@garlic.com>
Subject: Computer Speed Gains Erased By Modern Software
Date: 04 July, 2023
Blog: Facebook
Computer Speed Gains Erased By Modern Software
https://hackaday.com/2023/07/02/computer-speed-gains-erased-by-modern-software/

... old references about the paradigm shift from fast single-core chips to multi-core and parallel programming:

Intel's server chip chief knocks Itanium, Gates and AMD
https://www.theregister.com/2006/06/08/intel_gelsinger_stanford?page=2

"A couple of years ago, I had a discussion with Bill Gates (about the multi-core products)," Gelsinger said. "He was just in disbelief. He said, 'We can't write software to keep up with that.'"

Gates called for Intel to just keep making its chips faster. "No, Bill, it's not going to work that way," Gelsinger informed him.

The Intel executive then held his tongue. "I won't say anything derogatory about how they design their software, but . . . " Laughter all around.

VMware co-founder Mendel Rosenblum was sitting in the audience at Stanford and called Gelsinger out on the Gates jab.

"As much as I like to hear someone make fun of Microsoft, Bill Gates was dead on," he said. "This is a paradigm shift. Am I going to get a magic compiler that will parallelize my code?"


... snip ...

Microsoft super sizes multi-threaded tripe
https://www.theregister.com/2007/05/01/mundie_mundie/
Is Parallel Programming Just Too Hard?
https://developers.slashdot.org/story/07/05/29/0058246/is-parallel-programming-just-too-hard

multiprocessor, tightly-coupled, smp posts
https://www.garlic.com/~lynn/subtopic.html#smp

old posts mentioning multi-core & parallel programming paradigm shift:
https://www.garlic.com/~lynn/2022h.html#55 More John Boyd and OODA-loop
https://www.garlic.com/~lynn/2022h.html#51 Sun Tzu, Aristotle, and John Boyd
https://www.garlic.com/~lynn/2017f.html#14 Fast OODA-Loops increase Maneuverability
https://www.garlic.com/~lynn/2017e.html#52 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017c.html#48 The ICL 2900
https://www.garlic.com/~lynn/2016h.html#7 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2007p.html#55 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007n.html#39 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007n.html#38 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007n.html#28 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007n.html#25 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007n.html#6 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007n.html#3 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007n.html#1 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#70 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#61 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#59 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#58 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#54 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#53 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#52 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#51 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#49 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#39 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#37 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#29 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#26 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#22 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#19 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#14 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#13 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007m.html#5 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007l.html#63 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007l.html#60 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007l.html#42 My Dream PC -- Chip-Based
https://www.garlic.com/~lynn/2007l.html#38 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007l.html#34 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007l.html#26 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007l.html#24 Is Parallel Programming Just Too Hard?
https://www.garlic.com/~lynn/2007l.html#19 John W. Backus, 82, Fortran developer, dies

--
virtualization experience starting Jan1968, online at home since Mar1970

UNMITIGATED RISK

From: Lynn Wheeler <lynn@garlic.com>
Subject: UNMITIGATED RISK
Date: 04 July, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#48 UNMITIGATED RISK

We were brought in to help wordsmith some California legislation. Some of the participants were heavily into privacy issues and had done detailed public surveys, finding that the #1 issue was identity theft, especially involving fraudulent financial transactions using information from data breaches. There were three bills at the time: 1) electronic signature, 2) data breach notification (the 1st in the country), 3) opt-in personal information sharing. Normally an entity will take security measures in self-protection, but in these breach cases it was the public, not the institutions, that was at risk ... and little or nothing was being done. It was hoped that the publicity from breach notifications would motivate institutions to take security measures.

Then federal (breach notification) bills started appearing that would preempt the California legislation and limit the requirement for data breach notification, e.g. exempting 1) institutions that had an industry-defined security-audit certification (a joke at the time; certifications were easy to get and would just be revoked after an institution had a breach) and/or requiring 2) that a specific combination of personal data be compromised in the same breach (a combination that would rarely be the case).

While the "opt-in" legislation was worked on (institutions could share personal information only when there was record of individual agreeing), a (federal preemption) opt-out section was added to GLBA (institutions could share personal information unless they had a record of individual objecting). A few years later, at a national privacy conference in Wash. DC, there was panel discussion with the FTC commissioners. Somebody in the audience asked if the FTC was going to enforce "GLBA "opt-out". He said that he worked for a company that supplied "call center" technology for the financial industry ... and none of their "1-800 opt-out" call-in lines had any facility for making a record of a opt-out call (aka there would never be an "opt-out" record blocking sharing). The question was ignored.

electronic signature posts
https://www.garlic.com/~lynn/subpubkey.html#signature
data breach notification posts
https://www.garlic.com/~lynn/submisc.html#data.breach.notification
identity theft posts
https://www.garlic.com/~lynn/submisc.html#identity.theft
Risk, Fraud, Exploits, Threats, Vulnerabilities posts
https://www.garlic.com/~lynn/subintegrity.html#fraud

--
virtualization experience starting Jan1968, online at home since Mar1970

Do Democracies Always Deliver? As authoritarian capitalism gains credibility, free societies must overcome their internal weaknesses

From: Lynn Wheeler <lynn@garlic.com>
Subject: Do Democracies Always Deliver? As authoritarian capitalism gains credibility, free societies must overcome their internal weaknesses.
Date: 04 July, 2023
Blog: Facebook
Do Democracies Always Deliver? As authoritarian capitalism gains credibility, free societies must overcome their internal weaknesses.
https://foreignpolicy.com/2023/07/04/do-democracies-always-deliver/
The Booming Export of Authoritarianism. Ever more governments are reaching beyond their borders to silence their critics, according to a new Freedom House report.
https://foreignpolicy.com/2022/06/02/freedom-house-report-myanmar-targeting-dissidents/

Difference Between Authoritarian and Fascism
http://www.differencebetween.net/miscellaneous/difference-between-authoritarian-and-fascism/
Authoritarian capitalism
https://en.wikipedia.org/wiki/Authoritarian_capitalism
White nationalism
https://en.wikipedia.org/wiki/White_nationalism
The Great-Granddaddy of White Nationalism
https://www.southerncultures.org/article/the-great-granddaddy-of-white-nationalism/
Democracy is a threat to white supremacy--and that is the cause of America's crisis
https://www.fordfoundation.org/news-and-stories/stories/posts/democracy-is-a-threat-to-white-supremacy-and-that-is-the-cause-of-america-s-crisis/
Understanding the Global Rise of Authoritarianism
https://fsi.stanford.edu/news/understanding-global-rise-authoritarianism

On authoritarianism in America:

In this country, you have a major political party that has completely gone off the deep end. They're literally setting up a playbook where they can overturn the results of an election through the laws they are passing at the state level. And if Trump does come back, which is a 50-50 proposition, he's clearly going to run and it will be another 50-50 election, right? And even if he loses, maybe they'll succeed this time in overturning the result. They will start from such a more advanced authoritarian position than even in 2016 when he was elected.


... snip ...

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
racism posts
https://www.garlic.com/~lynn/submisc.html#racism

--
virtualization experience starting Jan1968, online at home since Mar1970

AI Scale-up

From: Lynn Wheeler <lynn@garlic.com>
Subject: AI Scale-up
Date: 29 June, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#43 AI Scale-up
https://www.garlic.com/~lynn/2023d.html#44 AI Scale-up
https://www.garlic.com/~lynn/2023d.html#47 AI Scale-up

note: the 2010 Intel E5-2600 blade, two chips (8 cores each) for a 16-core blade, benchmarked around 500BIPS (compared to 50BIPS for a 2010 max-configured IBM z196 mainframe)

2018; Intel Goes For 48-Cores: Cascade-AP with Multi-Chip Package Coming Soon
https://www.anandtech.com/show/13535/intel-goes-for-48cores-cascade-ap
2020, 2nd gen; New Intel Cascade Lake Xeon 'Performance' CPUs: Cutting Prices Too
https://www.anandtech.com/show/15542/intel-updates-2nd-gen-xeon-scalable-family-with-new-skus

Why we built an AI supercomputer in the cloud
https://research.ibm.com/blog/AI-supercomputer-Vela-GPU-cluster
IBM cloud AI supercomputer using 2nd generation Cascade Lake
https://www.hpcwire.com/2023/02/08/ibm-introduces-vela-cloud-ai-supercomputer-powered-by-intel-nvidia/

Intel 4th Gen Xeon Benchmarks: Sapphire Rapids Accelerators Revealed
https://hothardware.com/news/intel-new-rialto-bridge-falcon-shores-supercomputing-firepower

As evinced by its event at ISC 2022, Intel seems to be seeking to serve these customers with all sorts of customizable super-computing hardware. The company gave the first performance data, vague and relative as it is, for its upcoming Sapphire Rapids-family Xeon Scalable processors with on-package High-Bandwidth Memory (HBM), and it also talked about a couple of upcoming products: Rialto Bridge (the successor to Ponte Vecchio) and the Falcon Shores "XPU".

... snip ...

4th Generation Intel® Xeon® Scalable Processors
https://ark.intel.com/content/www/us/en/ark/products/series/228622/4th-generation-intel-xeon-scalable-processors.html
Intel® Xeon® Platinum 8490H Processor, Launched Q1'23, 60 cores

Intel Claims Sapphire Rapids up to 7X Faster Than AMD EPYC Genoa in AI and Other Workloads
https://www.tomshardware.com/news/intel-claims-sapphire-rapids-up-to-7x-faster-than-amd-epyc-genoa-in-ai-and-other-workloads
Intel 4th Gen Xeon Scalable Sapphire Rapids Performance Review
https://hothardware.com/reviews/intel-4th-gen-xeon-scalable-performance-review
Red Hat Enterprise Linux achieves significant performance gains with Intel's 4th Generation Xeon Scalable Processors
https://www.redhat.com/en/blog/red-hat-enterprise-linux-achieves-significant-performance-gains-intels-4th-generation-xeon-scalable-processors
Unlocking Performance: SingleStore's TPC-H Benchmark on 4th Gen Intel® Xeon® Processors in AWS
https://www.singlestore.com/blog/unlocking-performance-singlestore-s-tpc-h-benchmark-on-intel-sapphire-rapids-in-aws/
Sapphire Rapids
https://en.wikipedia.org/wiki/Sapphire_Rapids

--
virtualization experience starting Jan1968, online at home since Mar1970

Do Democracies Always Deliver? As authoritarian capitalism gains credibility, free societies must overcome their internal weaknesses

From: Lynn Wheeler <lynn@garlic.com>
Subject: Do Democracies Always Deliver? As authoritarian capitalism gains credibility, free societies must overcome their internal weaknesses.
Date: 05 July, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#51 Do Democracies Always Deliver? As authoritarian capitalism gains credibility, free societies must overcome their internal weaknesses.

The king of dark money effectively controls the US supreme court now
https://www.theguardian.com/commentisfree/2023/jun/30/supreme-court-leonard-leo-dark-money
US supreme court 'creeping dangerously towards authoritarianism', AOC says
https://www.theguardian.com/law/2023/jul/02/aoc-conservative-supreme-court-authoritarianism

The conservative supreme court is "creeping dangerously towards authoritarianism", the Democratic congresswoman Alexandria Ocasio-Cortez said on Sunday, raising again the unlikely scenario of impeaching justices for recent actions.

... snip ...

"Should There Really Be a Supreme Court? Its Role Always Has Been Anti-Democratic"
https://www.nakedcapitalism.com/2023/07/should-there-really-be-a-supreme-court-its-role-always-has-been-anti-democratic.html

past "Railroaded" aritcle/book
http://phys.org/news/2012-01-railroad-hyperbole-echoes-dot-com-frenzy.html
and
https://www.amazon.com/Railroaded-Transcontinentals-Making-America-ebook/dp/B0051GST1U
pg77/loc1984-86:

By the end of the summer of 1873 the western railroads had, within the span of two years, ended the Indian treaty system in the United States, brought down a Canadian government, and nearly paralyzed the U.S. Congress. The greatest blow remained to be delivered. The railroads were about to bring down the North American economy.


pg510/loc10030-33:

The result was not only unneeded railroads whose effects were as often bad as beneficial but also corruption of the markets and the government. The men who directed this capital were frequently not themselves capitalists. They were entrepreneurs who borrowed money or collected subsidies. These entrepreneurs did not invent the railroad, but they were inventing corporations, railroad systems, and new forms of competition. Those things yielded both personal wealth and social disasters


pg515/loc10118-22:

The need to invest capital and labor in large amounts to maintain and upgrade what had already been built was one debt owed to the past, but the second one was what Charles Francis Adams in his days as a reformer referred to as a tax on trade. All of the watered stock, money siphoned off into private pockets, waste, and fraud that characterized the building of the railroads created a corporate debt that had to be paid through higher rates and scrimping on service. A shipper in 1885 was still paying for the frauds of the 1860s.

... snip ...

The Supreme Court Has Never Been Apolitical. Many today fear the court is becoming just another partisan institution. But, in the past, justices sought elective office and counseled partisan allies. Some even coveted the White House themselves.
https://www.politico.com/news/magazine/2022/04/03/the-supreme-court-has-never-been-apolitical-00022482

But maybe that's not a bad thing. You can't address a problem until you acknowledge it exists. We have pretended over the past 50 years that the Supreme Court is an apolitical institution. It never really was, and it isn't today.

... snip ...

In the 1880s, the Supreme Court was scammed (by the railroads) into giving corporations "person" rights under the 14th Amendment.
https://www.amazon.com/We-Corporations-American-Businesses-Rights-ebook/dp/B01M64LRDJ/
pgxiii/loc45-50:

IN DECEMBER 1882, ROSCOE CONKLING, A FORMER SENATOR and close confidant of President Chester Arthur, appeared before the justices of the Supreme Court of the United States to argue that corporations like his client, the Southern Pacific Railroad Company, were entitled to equal rights under the Fourteenth Amendment. Although that provision of the Constitution said that no state shall "deprive any person of life, liberty, or property, without due process of law" or "deny to any person within its jurisdiction the equal protection of the laws," Conkling insisted the amendment's drafters intended to cover business corporations too.

... testimony falsely claiming the authors of the 14th Amendment intended to include corporations; pgxiv/loc74-78:

Between 1868, when the amendment was ratified, and 1912, when a scholar set out to identify every Fourteenth Amendment case heard by the Supreme Court, the justices decided 28 cases dealing with the rights of African Americans--and an astonishing 312 cases dealing with the rights of corporations.


pg36/loc726-28:

On this issue, Hamiltonians were corporationalists--proponents of corporate enterprise who advocated for expansive constitutional rights for business. Jeffersonians, meanwhile, were populists--opponents of corporate power who sought to limit corporate rights in the name of the people.


pg229/loc3667-68:

IN THE TWENTIETH CENTURY, CORPORATIONS WON LIBERTY RIGHTS, SUCH AS FREEDOM OF SPEECH AND RELIGION, WITH THE HELP OF ORGANIZATIONS LIKE THE CHAMBER OF COMMERCE.

... snip ...

False Profits: Reviving the Corporation's Public Purpose
https://www.uclalawreview.org/false-profits-reviving-the-corporations-public-purpose/

I Origins of the Corporation. Although the corporate structure dates back as far as the Greek and Roman Empires, characteristics of the modern corporation began to appear in England in the mid-thirteenth century.[4] "Merchant guilds" were loose organizations of merchants "governed through a council somewhat akin to a board of directors," and organized to "achieve a common purpose"[5] that was public in nature. Indeed, merchant guilds registered with the state and were approved only if they were "serving national purposes."[6]

... snip ...

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
racism posts
https://www.garlic.com/~lynn/submisc.html#racism

posts mentioning railroaded, conkling, jeffersonian, fourteenth amendment
https://www.garlic.com/~lynn/2023.html#78 The Progenitor of Inequalities - Corporate Personhood vs. Human Beings
https://www.garlic.com/~lynn/2022h.html#120 IBM Controlling the Market
https://www.garlic.com/~lynn/2022f.html#121 We need to rebuild a legal system where corporations owe duties to the general public
https://www.garlic.com/~lynn/2022d.html#88 Short-term profits and long-term consequences -- did Jack Welch break capitalism?
https://www.garlic.com/~lynn/2022d.html#39 The Supreme Court's History of Protecting the Powerful
https://www.garlic.com/~lynn/2022d.html#4 Alito's Plan to Repeal Roe--and Other 20th Century Civil Rights
https://www.garlic.com/~lynn/2022c.html#97 Why Companies Are Becoming B Corporations
https://www.garlic.com/~lynn/2022c.html#36 The Supreme Court Has Never Been Apolitical
https://www.garlic.com/~lynn/2021k.html#90 Gasoline costs more these days, but price spikes have a long history and happen for a host of reasons
https://www.garlic.com/~lynn/2021k.html#7 The COVID Supply Chain Breakdown Can Be Traced to Capitalist Globalization
https://www.garlic.com/~lynn/2021j.html#53 West from Appomattox: The Reconstruction of America after the Civil War
https://www.garlic.com/~lynn/2021j.html#46 Why Native Americans are buying back land that was stolen from them
https://www.garlic.com/~lynn/2021h.html#28 Massive infrastructure spending has a dark side. Yes, there is such a thing as dumb growth
https://www.garlic.com/~lynn/2021f.html#70 The Rise and Fall of an American Tech Giant
https://www.garlic.com/~lynn/2021f.html#46 Under God
https://www.garlic.com/~lynn/2021f.html#27 Why We Need to Democratize Wealth: the U.S. Capitalist Model Breeds Selfishness and Resentment
https://www.garlic.com/~lynn/2021e.html#36 How the Billionaires Corporate News Media Have Been Used to Brainwash Us
https://www.garlic.com/~lynn/2019e.html#144 PayPal, Western Union Named & Shamed for Overcharging the Most on Money Transfers to Mexico
https://www.garlic.com/~lynn/2019c.html#75 Packard Bell/Apple
https://www.garlic.com/~lynn/2019b.html#71 IBM revenue has fallen for 20 quarters -- but it used to run its business very differently
https://www.garlic.com/~lynn/2019b.html#47 Union Pacific Announces 150th Anniversary Celebration Commemorating Transcontinental Railroad's Completion
https://www.garlic.com/~lynn/2019.html#60 Grant (& Conkling)
https://www.garlic.com/~lynn/2018f.html#78 A Short History Of Corporations
https://www.garlic.com/~lynn/2018c.html#94 Barb
https://www.garlic.com/~lynn/2018c.html#52 We the Corporations: How American Businesses Won Their Civil Rights

--
virtualization experience starting Jan1968, online at home since Mar1970

AI Scale-up

From: Lynn Wheeler <lynn@garlic.com>
Subject: AI Scale-up
Date: 06 July, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#43 AI Scale-up
https://www.garlic.com/~lynn/2023d.html#44 AI Scale-up
https://www.garlic.com/~lynn/2023d.html#47 AI Scale-up
https://www.garlic.com/~lynn/2023d.html#52 AI Scale-up

Startup Builds Supercomputer with 22,000 Nvidia's H100 Compute GPUs. The world's second highest performing supercomputer.
https://www.tomshardware.com/news/startup-builds-supercomputer-with-22000-nvidias-h100-compute-gpus

A cluster powered by 22,000 Nvidia H100 compute GPUs is theoretically capable of 1.474 exaflops of FP64 performance -- that's using the Tensor cores. With general FP64 code running on the CUDA cores, the peak throughput is only half as high: 0.737 FP64 exaflops. Meanwhile, the world's fastest supercomputer, Frontier, has peak compute performance of 1.813 FP64 exaflops (double that to 3.626 exaflops for matrix operations). That puts the planned new computer at second place for now, though it may drop to fourth after El Capitan and Aurora come fully online.

... snip ...
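
The article's figures line up with the published per-GPU FP64 ratings for the H100 (roughly 34 TFLOPS on the CUDA cores, roughly 67 TFLOPS via the Tensor cores); a quick back-of-the-envelope check (my arithmetic, not from the article):

gpus = 22_000
fp64_tensor_tflops = 67          # approx H100 FP64 Tensor-core rating
fp64_cuda_tflops = 34            # approx H100 FP64 CUDA-core rating
print(f"tensor-core FP64: {gpus * fp64_tensor_tflops / 1e6:.3f} exaflops")  # ~1.474
print(f"CUDA-core FP64:   {gpus * fp64_cuda_tflops / 1e6:.3f} exaflops")    # ~0.75 (article says 0.737)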

AMD's MI300 APUs Power Exascale 'El Capitan' Supercomputer
https://www.tomshardware.com/news/amds-mi300-apus-power-exascale-el-capitan-supercomputer
2 ExaFLOPS Aurora Supercomputer Is Ready: Intel Max Series CPUs and GPUs Inside
https://www.tomshardware.com/news/2-exaflops-aurora-supercomputer-is-ready

--
virtualization experience starting Jan1968, online at home since Mar1970

How the Net Was Won

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: How the Net Was Won
Date: 07 July, 2023
Blog: Facebook
How the Net Was Won
https://heritage.umich.edu/stories/how-the-net-was-won/

Early 80s, I had the HSDT project, T1 and faster computer links; was also working with the NSF director and was supposed to get $20M to interconnect the NSF supercomputer centers. Then congress cuts the budget, some other things happen and finally an RFP was released (calling for T1, in part based on what we already had running) ... Preliminary announce (28Mar1986)
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

IBM internal politics did not allow us to bid (being blamed for online computer conferencing (precursor to social media) inside IBM likely contributed; folklore is that 5 of 6 members of the corporate executive committee wanted to fire me). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid, RFP awarded 24Nov87). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet
https://www.technologyreview.com/s/401444/grid-computing/

Note: the winning bid put in 440kbit/sec links and then, possibly to be able to claim a T1 network, installed T1 trunks with telco multiplexors running multiple 440kbit links per trunk (I periodically ridiculed this: why couldn't they claim a T3 or even T5 network, since at some point the T1 trunks could be multiplexed over T3 or even T5 trunks).
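
Back-of-the-envelope arithmetic on the point above (my numbers: the standard 1.544Mbit/sec T1 line rate; the 440kbit figure is from the award): a telco multiplexor can pack about three 440kbit links into one T1 trunk, but each link is still capped at 440kbit/sec, which is why trunking alone doesn't make it a "T1 network".

t1_bits_per_sec = 1.544e6        # standard DS1/T1 line rate
link_bits_per_sec = 440e3        # the award's 440kbit/sec links
print(int(t1_bits_per_sec // link_bits_per_sec), "x 440kbit links per T1 trunk")   # 3
print(f"each link is only {link_bits_per_sec / t1_bits_per_sec:.0%} of a T1")      # ~28%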

hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
nsfnet posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

Co-worker at Cambridge Science Center and then San Jose Research,
https://en.wikipedia.org/wiki/Edson_Hendricks

responsible for the internal network; larger than ARPANET/Internet from just about the beginning until sometime mid/late 80s ... technology also used for the corporate sponsored univ. BITNET (also larger than Internet for a time)
https://en.wikipedia.org/wiki/BITNET
CSNET and BITNET merge in 1989
https://en.wikipedia.org/wiki/Corporation_for_Research_and_Educational_Networking

Edson got shutdown trying to move to tcp/ip
https://en.wikipedia.org/wiki/Edson_Hendricks
SJMN news article gone 404, but lives on at wayback machine
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
some more Ed from his web pages (at wayback machine)
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm
long-winded (linkedin) post
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet

There were some claims that all the AUP (acceptable use policies) existed because there were lots of "telco" free donations (in the NSFNET case, something like four times the winning bid). Telcos had a huge amount of dark fiber, but they had large fixed costs and their major revenue was based on bit-use charges ... so they were stuck in a chicken-and-egg situation. There weren't the applications to make use of all the dark fiber; to promote the development/evolution of bandwidth-hungry applications, they would have had to radically reduce their bit-use charging ... and go through several years of operating at a loss. Donating significant dark fiber to non-commercial use only promotes the development of the bandwidth-hungry apps, while not affecting their commercial revenue stream.

some past posts mentioning dark fiber for non-commercial use
https://www.garlic.com/~lynn/2014.html#3 We need to talk about TED
https://www.garlic.com/~lynn/2012j.html#89 Gordon Crovitz: Who Really Invented the Internet?
https://www.garlic.com/~lynn/2012j.html#88 Gordon Crovitz: Who Really Invented the Internet?
https://www.garlic.com/~lynn/2010e.html#64 LPARs: More or Less?
https://www.garlic.com/~lynn/2006j.html#45 Arpa address
https://www.garlic.com/~lynn/2004l.html#1 Xah Lee's Unixism

... old archived directory

INDEX FOR NSFNET Policies and Procedures

3 Jun 93

This directory contains information about the policies and procedures established by the National Science Foundation Network (NSFNET) and its associated networks. These documents were collected by the NSF Network Service Center (NNSC). With thanks to the NNSC and Bolt Beranek and Newman, Inc., they are now available by anonymous FTP from InterNIC Directory and Database Services on ds.internic.net.

README MRNet Brief.txt Template-albany.edu Template-arizona.edu Template-auburn.edu Template-auburnschl.edu Template-bates.edu Template-bowdoin.edu Template-brown.edu Template-bu.edu Template-bvc.edu Template-carleton.edu Template-cba.uga.edu Template-ceu.edu Template-cookman.edu Template-ctstateu.edu Template-dartmouth.edu Template-eiu.edu Template-exploratorium.edu Template-hamptonu.edu Template-iastate.edu Template-macalstr.edu Template-mccc.edu Template-miu.edu Template-mnsmc.edu Template-muskingum.edu Template-mwc.edu Template-ncsu.edu Template-nevadanet Template-niu.edu Template-noao.edu Template-provo.edu Template-ricks.edu Template-rpi.edu Template-sl.edu Template-snc.edu Template-spc.edu Template-spu.edu Template-sru.bitnet Template-stmarys-ca.edu Template-suffolk.edu Template-susqu.edu Template-tarleton.edu Template-trinity.edu Template-twu.edu Template-ua.edu Template-uidaho.edu Template-uiowa.edu Template-umass.edu Template-unf.edu Template-uoregon.edu Template-uwgb.edu Template-vims.edu Template-westga.edu Template-wlu.edu Template-wofford.edu Template-wooster.edu Template.umsl.edu Template.uncecs.edu [mail]mail.gateway [mail]ua-standards [uofa.commandments]commandments.version-2[obsolete] [uofa]assigned-numbers-and-names ans.policy barrnet.policy bowdoin-computer-use-policy cerfnet.policy cicnet.policy cren.policy dartmouth-computing-code eiu-policy farnet.policy fricc.policy indnet.policy jvnc.policy los-nettos.policy ls-lsR michnet.policy mnsmc-policy mrnet.0README mrnet.policy ncsanet.policy nearnet.policy netpolicy.src nevadanet.policy northwestnet.policy nsfnet.policy nysernet.policy oarnet.policy onet.policy prepnet.policy rpi-rcs-conditions-of-use spu-internet-user-guide statement-of-computer-ethics uiowa-internet-man-page uninet.policy uonet-access uonet-appendixes-glossary uonet-dos-ps-network-workstation uonet-guidelines uonet-mac-network-workstation uonet-networked-unix-workstations uonet-users-guide uonet-vax-vmx-network-software usenet-news-policy widener-student-computing-resource-policy


... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

How the Net Was Won

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: How the Net Was Won
Date: 07 July, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#55 How the Net Was Won

some government agencies were mandating the elimination of internet and tcp/ip ... and only use of (OSI) "GOSIP".
https://en.wikipedia.org/wiki/Government_Open_Systems_Interconnection_Profile
At interop '88 ... it seemed like 2/3rds of the booths had OSI products.
https://historyofcomputercommunications.info/section/14.11/interop-(tcp-ip)-trade-show-september/
https://www.computerhistory.org/collections/catalog/102653208

trivia: over the weekend (before the show opened), the floor nets were going down (with packet floods) ... finally diagnosed ... and an item on the problem was included in RFC1122
https://www.rfc-editor.org/info/rfc1122

internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
interop 88 posts
https://www.garlic.com/~lynn/subnetwork.html#interop88

LAN/MAC trivia: I was on the XTP TAB (the corporate communication group fought hard to block it). There were some number of gov/military types involved with XTP (who needed it standardized). So (somewhat for them) XTP was taken to the ISO-chartered ANSI group responsible for OSI level 3&4 (network & transport) standards (X3S3.3) as "HSP". Eventually they said they couldn't do it, because there was an ISO directive that they could only standardize protocols that conformed to the OSI model. XTP didn't conform to the OSI model because it 1) supported an internetworking protocol (non-existent in OSI, between levels 3&4), 2) went directly to the LAN MAC interface (non-existent, somewhere in the middle of OSI level 3), and 3) skipped the level 3/4 interface (going directly from transport to LAN MAC).

XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

For the T3 NSFNET upgrade, I was asked to be the red team (I guess they thought it could shut down my ridiculing) and there were a couple dozen people from a half dozen labs on the blue team. At the executive review, I presented first; then, five minutes into the blue team presentation, the executive pounded on the table and said he would lay down in front of a garbage truck before allowing anything but the blue team proposal to go forward (I and a few others got up and left).

nsfnet posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

Last product we did at IBM was HA/CMP ... started out HA/6000 for NYTimes to migrate their newspaper system (ATEX) off VAXCluster to RS/6000; I rename it HA/CMP (High Availability Cluster Multi-Processing) when I start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors ... then end of JAN1992, cluster scale-up is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*), we leave IBM a few months later.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

Not long after, we were brought in as consultants to a small client/server startup. Two of the former Oracle people (whom we had worked with on cluster scale-up) were there, responsible for something called "commerce server", and wanted to do payment transactions on the server. The startup had also done this technology they called "SSL" that they wanted to use; the result is now frequently called "electronic commerce". I had responsibility for gateways to the payment networks and the protocol between webservers and the gateways. Later, Postel
https://en.wikipedia.org/wiki/Jon_Postel
sponsors my talk on "Why The Internet Isn't Business Critical Dataprocessing" based on all the stuff I had to do to support electronic commerce.

payment gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway

--
virtualization experience starting Jan1968, online at home since Mar1970

How the Net Was Won

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: How the Net Was Won
Date: 08 July, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#55 How the Net Was Won
https://www.garlic.com/~lynn/2023d.html#56 How the Net Was Won

SJR got first corporate CSNET gateway
https://en.wikipedia.org/wiki/CSNET
fall of 1982 ... some old email
https://www.garlic.com/~lynn/98.html#email821022
https://www.garlic.com/~lynn/2002p.html#email821122
https://www.garlic.com/~lynn/2008s.html#email821204
CSNET mentions trouble with conversion to internetworking & tcp/ip
https://www.garlic.com/~lynn/2002p.html#email830109
https://www.garlic.com/~lynn/2000e.html#email830202

The great conversion of the ARPANET HOST protocol to TCP/IP internetworking came 1Jan1983 (for approx 100 IMPs and 255 hosts); at the time the internal network was rapidly approaching 1000 nodes ... old post with list of corporate locations that added one or more nodes during 1983
https://www.garlic.com/~lynn/2006k.html#8

then right after interop-88 got 9-net ... old email
https://www.garlic.com/~lynn/2006j.html#email881216

... a big difference between internet and corporate network growth was the internet's addition of PCs and workstations as network nodes. By comparison, on the corporate network 1) all links (leaving physical locations) were required to be encrypted ... lots of gov. resistance, especially when links crossed national boundaries, and 2) the communication group strongly resisted anything but mainframe hosts as network nodes (everything else limited to 3270 dumb terminal emulation), eventually managing to convert the corporate network to SNA/VTAM ... rather than TCP/IP. The communication group was spreading all sorts of misinformation internally ... including claims that SNA/VTAM could be used for NSFNET; somebody gathered a lot of the misinformation email and forwarded it to us ... heavily clipped and redacted (to protect the guilty):
https://www.garlic.com/~lynn/2006w.html#email870109

The communication group had also been heavily fighting the release of mainframe TCP/IP ... but apparently some influential univs got that reversed. Then the communication group said that since they had corporate responsibility for everything that crossed datacenter walls, it had to be released through them. What shipped got 44kbytes/sec aggregate using nearly a whole IBM 3090 processor. I then did the enhancements to support RFC1044 and, in some tuning tests at Cray Research between an IBM 4341 and a Cray, got sustained 4341 channel throughput using only a modest amount of the 4341 processor (something like a 500 times improvement in bytes moved per instruction executed).

misc internet/webserver teething ...

A TCP session requires a minimum seven-packet exchange for reliable transmission (XTP had defined reliable transmission done in a minimum three-packet exchange). HTTP implemented its reliable transaction operation using TCP sessions ... which resulted in a big performance problem. Most of the common webserver deployments were on platforms using BSD TCP/IP (tahoe/reno), which implemented session close with a linear search of the FINWAIT list ... apparently assuming no more than a few entries. However, as webserver/HTTP load increased, FINWAIT lists were running into the thousands and 95% of platform CPU was being spent running the FINWAIT list. NETSCAPE kept adding more and more overloaded server platforms until they eventually installed a large SEQUENT multiprocessor, where the problem with large FINWAIT lists had been fixed in DYNIX some time before. It was another six months before we started seeing other server vendors deploying the FINWAIT scan fix for their platforms.
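
A minimal, illustrative-only sketch (in Python, not the actual BSD tahoe/reno kernel code) of why a linear FINWAIT-list search collapses under webserver load: each connection close walks the whole list, so total work grows roughly with the square of the list size, while a keyed lookup stays flat.

import time

def run(n, linear):
    if linear:
        finwait = list(range(n))             # connections sitting in FINWAIT
        t0 = time.perf_counter()
        for conn in reversed(range(n)):      # worst case: match found at the end
            finwait.remove(conn)             # O(len) scan per close
    else:
        finwait = dict.fromkeys(range(n))    # keyed lookup instead of a scan
        t0 = time.perf_counter()
        for conn in reversed(range(n)):
            del finwait[conn]                # O(1) per close
    return time.perf_counter() - t0

for n in (1_000, 5_000, 20_000):
    print(f"{n:>6} entries   linear {run(n, True):7.3f}s   keyed {run(n, False):7.3f}s")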

An early major "electronic commerce" server was by a sports company that was advertising during weekend national football games half-time (and expecting big uptic in weekend webserver activity). However, major Internet ISPs were still doing rolling weekend router maintenance downtime (w/increasing likelihood that they would be offline during expected peak activity). I taught some classes to the browser implementers that included examples of multiple-A record support from tahoe/reno clients ... but they would say that was too hard (I joke that if it wasn't in Steven's text book, they weren't going to do it, it took another year to get multiple-A record support in the browser).

A little later, GOOGLE was seeing increasing loads, installing an increasing number of servers, and attempting load balancing ... eventually modifying the internet-boundary CISCO routers (w/T3 links) to track backend server loads and explicitly do dynamic transaction load-balancing routing to the backend servers.
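
A hedged sketch of the idea in the paragraph above (my illustration, not Google's actual router modifications): a front end that tracks backend-reported load and routes each new transaction to the currently least-loaded server.

class LoadBalancer:
    def __init__(self, backends):
        self.load = {b: 0.0 for b in backends}    # most recent reported load per backend
    def report(self, backend, load):
        self.load[backend] = load                 # backends periodically report their load
    def pick(self):
        return min(self.load, key=self.load.get)  # route to the least-loaded backend

lb = LoadBalancer(["srv1", "srv2", "srv3"])
lb.report("srv1", 0.8); lb.report("srv2", 0.3); lb.report("srv3", 0.5)
print(lb.pick())                                  # -> srv2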

NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
interop 88 posts
https://www.garlic.com/~lynn/subnetwork.html#interop88
rfc1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

some posts mentioning multiple-A record and/or FINWAIT
https://www.garlic.com/~lynn/2023b.html#62 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2023.html#82 Memories of Mosaic
https://www.garlic.com/~lynn/2023.html#42 IBM AIX
https://www.garlic.com/~lynn/2022f.html#27 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#28 IBM "nine-net"
https://www.garlic.com/~lynn/2021k.html#80 OSI Model
https://www.garlic.com/~lynn/2021h.html#86 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2021f.html#29 Quic gives the internet's data transmission foundation a needed speedup
https://www.garlic.com/~lynn/2019b.html#66 IBM Tumbles After Reporting Worst Revenue In 17 Years As Cloud Hits Air Pocket
https://www.garlic.com/~lynn/2019.html#74 21 random but totally appropriate ways to celebrate the World Wide Web's 30th birthday
https://www.garlic.com/~lynn/2018f.html#102 Netscape: The Fire That Filled Silicon Valley's First Bubble
https://www.garlic.com/~lynn/2018f.html#60 1970s school compsci curriculum--what would you do?
https://www.garlic.com/~lynn/2018d.html#63 tablets and desktops was Has Microsoft
https://www.garlic.com/~lynn/2018.html#106 Predicting the future in five years as seen from 1983
https://www.garlic.com/~lynn/2018.html#105 Predicting the future in five years as seen from 1983
https://www.garlic.com/~lynn/2017i.html#45 learning Unix, was progress in e-mail, such as AOL
https://www.garlic.com/~lynn/2017h.html#47 Aug. 9, 1995: When the Future Looked Bright for Netscape
https://www.garlic.com/~lynn/2017c.html#54 The ICL 2900
https://www.garlic.com/~lynn/2017c.html#52 The ICL 2900
https://www.garlic.com/~lynn/2017b.html#21 Pre-internet email and usenet (was Re: How to choose the best news server for this newsgroup in 40tude Dialog?)
https://www.garlic.com/~lynn/2016g.html#49 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016e.html#127 Early Networking
https://www.garlic.com/~lynn/2016e.html#43 How the internet was invented
https://www.garlic.com/~lynn/2015h.html#113 Is there a source for detailed, instruction-level performance info?
https://www.garlic.com/~lynn/2015g.html#96 TCP joke
https://www.garlic.com/~lynn/2015f.html#71 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2015e.html#25 The real story of how the Internet became so vulnerable
https://www.garlic.com/~lynn/2015e.html#24 The real story of how the Internet became so vulnerable
https://www.garlic.com/~lynn/2015d.html#50 Western Union envisioned internet functionality
https://www.garlic.com/~lynn/2015d.html#45 Western Union envisioned internet functionality
https://www.garlic.com/~lynn/2015d.html#23 Commentary--time to build a more secure Internet
https://www.garlic.com/~lynn/2015d.html#2 Knowledge Center Outage May 3rd
https://www.garlic.com/~lynn/2014j.html#76 No Internet. No Microsoft Windows. No iPods. This Is What Tech Was Like In 1984
https://www.garlic.com/~lynn/2014h.html#26 There Is Still Hope
https://www.garlic.com/~lynn/2014g.html#13 Is it time for a revolution to replace TLS?
https://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2013n.html#29 SNA vs TCP/IP
https://www.garlic.com/~lynn/2013i.html#48 Google takes on Internet Standards with TCP Proposals, SPDY standardization
https://www.garlic.com/~lynn/2013i.html#46 OT: "Highway Patrol" back on TV
https://www.garlic.com/~lynn/2013h.html#8 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013f.html#61 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013c.html#83 What Makes an Architecture Bizarre?

--
virtualization experience starting Jan1968, online at home since Mar1970

The United States' Financial Quandary: ZIRP's Only Exit Path Is a Crash

From: Lynn Wheeler <lynn@garlic.com>
Subject: The United States' Financial Quandary: ZIRP's Only Exit Path Is a Crash
Date: 08 July, 2023
Blog: Facebook
The United States' Financial Quandary: ZIRP's Only Exit Path Is a Crash
https://www.nakedcapitalism.com/2023/07/the-united-states-financial-quandary-zirps-only-exit-path-is-a-crash.html

The key message of Michael Hudson's piece is how the Fed has painted itself in a corner with its super low interest rate policy. In fact, the Bernanke Fed found out when it announced what was intended to be an attenuated exit. Mr. Market responded with the "taper tantrum" and the central bank went into "Never mind" mode.

... snip ...

The SECTREAS had lobbied congress for TARP funds to buy TBTF offbook toxic assets, but with only $700B appropriated, it would hardly have made a dent (just the four largest TBTF were still carrying $5.2T YE2008) ... TARP was then used for other things and the Federal Reserve did the real bailout ... buying a couple trillion in offbook toxic assets at 98cents on the dollar and providing tens of trillions in ZIRP funds. The FED fought a hard legal battle to prevent making public what it was doing. When they lost, the chairman held a press conference and said that he had expected TBTF/wallstreet to help mainstreet, and when they didn't he had no way to force them (although that didn't stop the ZIRP funds). Note that the chairman had been selected (in part) for having been a student of the depression, when the FED had tried something similar with the same results (so there should have been no expectation of anything different this time).

economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess
FED chairman posts
https://www.garlic.com/~lynn/submisc.html#fed.chairman
Too Big To Fail posts
https://www.garlic.com/~lynn/submisc.html#too-big-to-fail
toxic-cdo posts
https://www.garlic.com/~lynn/submisc.html#toxic.cdo
ZIRP posts
https://www.garlic.com/~lynn/submisc.html#zirp

--
virtualization experience starting Jan1968, online at home since Mar1970

How the Net Was Won

From: Lynn Wheeler <lynn@garlic.com>
Subject: How the Net Was Won
Date: 09 July, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#55 How the Net Was Won
https://www.garlic.com/~lynn/2023d.html#56 How the Net Was Won
https://www.garlic.com/~lynn/2023d.html#57 How the Net Was Won

another teething story: 17Jun1995, the internet-facing servers at the largest (dialup) online service provider started crashing ... and continued crashing while they had the usual experts in to diagnose the problem. 17Aug, one of their consultants flew out to the west coast and bought me a hamburger after work (a little place on El Camino south of Page Mill) ... while I ate the hamburger, he explained the symptoms. After I finished the hamburger, I explained the problem (sort of a crack between the RFC specifications and the way the TCP code was written) and gave him a quick&dirty patch that would stop the crashes, which he deployed that evening. I then contacted the major server vendors to incorporate a fix ... but they said nobody was complaining (i.e. the online service provider didn't want their experience in the news). Almost exactly a year later it hit the press when a (different) service provider in NY was having the problem ... and the server vendors patted themselves on the back that they were able to ship fixes within a month.

internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

misc. past posts mentioning the problem
https://www.garlic.com/~lynn/2021c.html#69 Online History
https://www.garlic.com/~lynn/2017h.html#119 AOL
https://www.garlic.com/~lynn/2017g.html#81 Running unsupported is dangerous was Re: AW: Re: LE strikes again
https://www.garlic.com/~lynn/2017e.html#15 The Geniuses that Anticipated the Idea of the Internet
https://www.garlic.com/~lynn/2015e.html#25 The real story of how the Internet became so vulnerable
https://www.garlic.com/~lynn/2013i.html#14 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2012o.html#68 PC/mainframe browser(s) was Re: 360/20, was 1132 printer history
https://www.garlic.com/~lynn/2012k.html#60 Core characteristics of resilience
https://www.garlic.com/~lynn/2008n.html#35 Builders V. Breakers
https://www.garlic.com/~lynn/2008l.html#21 recent mentions of 40+ yr old technology
https://www.garlic.com/~lynn/2008b.html#34 windows time service
https://www.garlic.com/~lynn/2005c.html#51 [Lit.] Buffer overruns

--
virtualization experience starting Jan1968, online at home since Mar1970

CICS Product 54yrs old today

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: CICS Product 54yrs old today
Date: 09 July, 2023
Blog: Facebook
I had taken a two credit hr intro to fortran/computers ... at the end of the semester I was hired to rewrite 1401 MPIO in 360 assembler for the 360/30 (the Univ. had been sold a 360/67 for TSS/360 to replace the 709/1401, and got a 360/30 to replace the 1401 pending arrival of the 360/67). The 360/30 had 1401 emulation so could have continued to run 1401 MPIO ... but the objective was to start getting 360 experience. The univ. shutdown the datacenter over the weekends and I would have the whole place dedicated (although 48hrs w/o sleep made Monday classes a little hard). Was given a bunch of hardware&software manuals and got to design my own monitor, interrupt handlers, device drivers, error recovery, storage manager, and within a few weeks had a 2000-card assembler program. Then within a year of the intro class, was hired fulltime responsible for OS/360 (TSS/360 never came to production fruition). Then the Univ. library was given an ONR grant for an online catalog and was also selected to be a betatest site for the coming CICS product ... and debugging CICS was added to my tasks. The 1st "bug" was CICS wouldn't come up. Turns out CICS hadn't documented some BDAM options that had been hard-coded, and the library had built its BDAM datasets with a different set of options.

Other CICS lore (gone 404, but lives on at wayback machine):
https://web.archive.org/web/20071124013919/http://www.yelavich.com/history/toc.htm
https://web.archive.org/web/20050409124902/http://www.yelavich.com/cicshist.htm
https://web.archive.org/web/20060325095613/http://www.yelavich.com/history/ev200401.htm
https://web.archive.org/web/20090107054344/http://www.yelavich.com/history/ev200402.htm

CICS &/or BDAM posts
https://www.garlic.com/~lynn/submain.html#cics

--
virtualization experience starting Jan1968, online at home since Mar1970

mainframe bus wars, How much space did the 68000 registers take up?

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: mainframe bus wars, How much space did the 68000 registers take up?
Newsgroups: comp.arch
Date: Mon, 10 Jul 2023 09:56:54 -1000
John Levine <johnl@taugh.com> writes:

Later on IBM made several PC add-in cards that ran 370 code. They worked fine but sold poorly because nobody wanted to run mainframe software on PCs.


VM370/CMS had gotten increasingly bloated (memory and disk I/O) since CP67/CMS days running on a 256kbyte 360/67. The prototype XT/370 had 384kbytes of 370 memory and all I/O was done via messages to the 8088, which did the actual physical I/O. I did various (single CMS user) benchmarks showing there was a lot of paging, requiring messaging the 8088 to do page I/O on the XT hard disk (something like 5 times slower than mainframe disks). They blamed me for having to upgrade to 512kbytes of 370 memory before 1st shipments to customers (to address frequent page thrashing).

Then, CMS interactive applications had gotten quite filesystem intensive (especially compared to similar PC applications that were highly optimized to do lots of processing with minimal filesystem operations) ... again lots of disk I/O via messages to the 8088 (doing the actual I/O on the XT hard disk, significantly slower than mainframe disks).

One of my hobbies after joining IBM was enhanced production operating systems for internal datacenters ... I had several internal datacenters getting 90th-percentile .11sec system trivial interactive response (compared to standard VM370 with similar mainframe hardware configurations and workloads getting avg .2-.3sec system trivial interactive response). XT/370 CMS trivial interactive response tended to be worse than both similar native PC applications as well as many standard mainframe systems.

VM370/CMS had also tried to optimize caching of frequently/commonly used applications/operations in the larger real mainframe memories (somewhat masking the bloat since CP67/CMS) ... which on XT/370 would always result in real disk I/O. XT/370 got about 100KIPS (.1MIPS) 370 ... but also compared poorly with a multi-user 370/125 (which had comparable memory and processing to the XT/370 ... but much faster real disk I/O).

caveat: VM370/CMS was never officially available for the 370/125 ... but I got con'ed into doing it for a customer ... eliminating some of the bloat that occurred in the CP67/CMS -> VM370/CMS transition.

trivia: mainframes are configured for balanced throughput; mid-80s, the largest IBM mainframe, the 3090, was initially configured with the number of I/O channels expected to give the target throughput. The disk division had come out with the 3380 having 3mbyte/sec transfer (compared to the previous 3330 at .8mbyte/sec). However, the associated disk controller, the 3880, while supporting 3mbyte/sec transfer, had an extremely slow processor for all other operations (compared to the 3330's 3830 disk controller), which significantly increased the elapsed channel busy per operation (offsetting the faster transfer). Eventually the 3090 group realized they had to significantly increase the number of channels (to compensate for the controller protocol channel-busy overhead). Marketing then respun the big increase in the number of channels as making it a fabulous I/O machine (when it was actually necessary to compensate for the 3880 controller busy).
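
A sketch of the tradeoff described above, with purely hypothetical placeholder numbers (not IBM measurements): per-operation channel busy is controller/protocol overhead plus transfer time, so a slower controller can eat the gain from a faster data rate, and hitting the same I/O rate then takes more channels.

def busy_ms(xfer_kb, mb_per_sec, ctrl_overhead_ms):
    # elapsed channel busy per operation = controller overhead + data transfer time
    return ctrl_overhead_ms + (xfer_kb / 1024.0) / mb_per_sec * 1000.0

xfer_kb = 4.0                                  # assumed transfer size per operation
old = busy_ms(xfer_kb, 0.8, 0.3)               # 3330/3830 era: .8MB/s, assumed 0.3ms overhead
new = busy_ms(xfer_kb, 3.0, 5.0)               # 3380/3880 era: 3MB/s, assumed 5ms overhead
for label, b in (("3330/3830", old), ("3380/3880", new)):
    print(f"{label}: {b:4.1f} ms busy/op -> {1000/b:4.0f} ops/sec per channel")
# with these placeholder numbers the faster-transfer setup supports fewer ops/sec
# per channel (the shape of why the 3090 ended up needing more channels)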

Something similar happened with the IBM "FICON" protocol running over Fibre Channel Standard. The most recent public, published numbers I've found are the 2010 z196 "peak I/O" benchmark getting 2M IOPS with 104 FICON (running over 104 FCS). About the same time, an FCS was announced for E5-2600 blades claiming over a million IOPS (two such FCS having higher throughput than the 104 FICON running over 104 FCS).
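
Working the arithmetic directly from the figures quoted above:

z196_iops, ficon_count = 2_000_000, 104     # 2010 z196 "peak I/O" benchmark
fcs_iops = 1_000_000                        # "over a million IOPS" per E5-2600 blade FCS
per_ficon = z196_iops / ficon_count
print(f"per-FICON rate: {per_ficon:,.0f} IOPS")                      # ~19,200
print(f"one native FCS is ~{fcs_iops / per_ficon:.0f}x a FICON")     # ~52x
print(f"two native FCS: {2 * fcs_iops:,} IOPS vs 104 FICON: {z196_iops:,} IOPS")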

FICON & FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon

some xt/370 posts
https://www.garlic.com/~lynn/2021h.html#77 IBM XT/370
https://www.garlic.com/~lynn/2020.html#39 If Memory Had Been Cheaper
https://www.garlic.com/~lynn/2016b.html#29 Qbasic
https://www.garlic.com/~lynn/2013o.html#8 'Free Unix!': The world-changing proclamation made30yearsagotoday
https://www.garlic.com/~lynn/2013l.html#30 model numbers; was re: World's worst programming environment?
https://www.garlic.com/~lynn/2012p.html#8 AMC proposes 1980s computer TV series Halt & Catch Fire
https://www.garlic.com/~lynn/2008r.html#38 "True" story of the birth of the IBM PC
https://www.garlic.com/~lynn/2005f.html#6 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2000e.html#52 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2000.html#5 IBM XT/370 and AT/370 (was Re: Computer of the century)

some posts mentioning 3090, 3880, number of channels
https://www.garlic.com/~lynn/2023d.html#4 Some 3090 & channel related trivia:
https://www.garlic.com/~lynn/2022g.html#75 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022e.html#100 Mainframe Channel I/O
https://www.garlic.com/~lynn/2022e.html#49 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022c.html#106 TCMs & IBM Mainframe
https://www.garlic.com/~lynn/2022b.html#77 Channel I/O
https://www.garlic.com/~lynn/2022b.html#15 Channel I/O
https://www.garlic.com/~lynn/2022.html#13 Mainframe I/O
https://www.garlic.com/~lynn/2021k.html#122 Mainframe "Peak I/O" benchmark
https://www.garlic.com/~lynn/2021j.html#92 IBM 3278
https://www.garlic.com/~lynn/2021i.html#30 What is the oldest computer that could be used today for real work?
https://www.garlic.com/~lynn/2021f.html#23 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2020.html#42 If Memory Had Been Cheaper
https://www.garlic.com/~lynn/2019.html#79 How many years ago?
https://www.garlic.com/~lynn/2018.html#0 Intrigued by IBM
https://www.garlic.com/~lynn/2017k.html#25 little old mainframes, Re: Was it ever worth it?
https://www.garlic.com/~lynn/2017d.html#1 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2016d.html#24 What was a 3314?
https://www.garlic.com/~lynn/2015.html#36 [CM] IBM releases Z13 Mainframe - looks like Batman
https://www.garlic.com/~lynn/2013m.html#78 'Free Unix!': The world-changing proclamation made 30 years agotoday
https://www.garlic.com/~lynn/2012o.html#27 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012o.html#22 Assembler vs. COBOL--processing time, space needed
https://www.garlic.com/~lynn/2012c.html#23 M68k add to memory is not a mistake any more
https://www.garlic.com/~lynn/2011f.html#0 coax (3174) throughput
https://www.garlic.com/~lynn/2000b.html#38 How to learn assembler language for OS/390 ?

--
virtualization experience starting Jan1968, online at home since Mar1970

Online Before The Cloud

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Online Before The Cloud
Date: 10 July, 2023
Blog: Facebook
some of the MIT CTSS/7094 people
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
went to the 5th flr for Multics.
https://en.wikipedia.org/wiki/Multics
Others went to the IBM cambridge science center
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center

on the 4th floor and did virtual machines (1st CP40/CMS on a 360/40 with virtual memory hardware mods; it morphs into CP67/CMS when the 360/67, standard with virtual memory, becomes available, and later CP67/CMS morphs into VM370/CMS), the internal network, CTSS RUNOFF ported to CMS as SCRIPT, a bunch of online and performance apps, and GML, invented in 1969, with GML tag processing added to SCRIPT (after a decade, GML morphs into ISO standard SGML; after another decade, it morphs into HTML at CERN). There were also a couple of commercial spinoffs of the science center in the 60s, providing virtual-machine online services.

One of the 70s commercial online services, TYMSHARE
https://en.wikipedia.org/wiki/Tymshare
provided their CMS-based online computer conferencing, "free" to the mainframe user group SHARE
https://en.wikipedia.org/wiki/SHARE_(computing)
in Aug1976 as VMSHARE, archives here
http://vm.marist.edu/~vmshare

The internal network was larger than ARPANET/Internet from just about the beginning until sometime mid/late 80s. The technology was also used for the corporate sponsored, univ network (also larger than ARPANET/Internet for a time)
https://en.wikipedia.org/wiki/BITNET

First webserver in the US was on Stanford SLAC (CERN sister institution) VM370/CMS system
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994

trivia: before ms/dos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was cp/m
https://en.wikipedia.org/wiki/CP/M
before developing cp/m, kildall worked on cp67/cms at npg
https://en.wikipedia.org/wiki/Naval_Postgraduate_School

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
commercial virtual machine online service posts
https://www.garlic.com/~lynn/submain.html#online
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
GML/SGML at the science center
https://www.garlic.com/~lynn/submain.html#sgml

--
virtualization experience starting Jan1968, online at home since Mar1970

CICS Product 54yrs old today

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: CICS Product 54yrs old today
Date: 10 July, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#60 CICS Product 54yrs old today

A comment about IBM's forever commitment to 360 forgets that in the 1st half of the 70s IBM had the Future System effort, which was completely different from 370 and was going to completely replace it. Internal politics during the FS period was killing off 370 efforts (the claim is that the lack of new IBM 370 products during the period is responsible for clone 370 makers getting their market foothold). When FS finally implodes, there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033 & 3081 efforts in parallel. Some more info:
http://www.jfsowa.com/computer/memo125.htm

From Ferguson & Morris, "Computer Wars: The Post-IBM World", Time Books, 1993
http://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394

"and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrong headedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive."

... snip ...

tome about how Learson failed in his effort to stop the bureaucrats, careerists and MBAs from destroying the Watson legacy:
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

trivia: I continued to work on 360&370 stuff all during FS period, even periodically ridiculing what they were doing ... which wasn't exactly a career enhancing activity.

Other trivia: Amdahl left IBM not long after the ACS/360 effort was killed (executives were afraid that it would advance the state of the art too fast and IBM would lose control of the market) ... some more ACS/360 background (including features that show up more than 20yrs later with ES/9000):
https://people.cs.clemson.edu/~mark/acs_end.html

Amdahl gave a talk at MIT in the early 70s not long after forming his company. He was asked what business rationale he used with the venture people for starting his company. He said something to the effect that even if IBM were to totally walk away from 360, there was enough 360 software to keep him in business for the next 30 yrs. In later years, he claims that he left before FS started and had no knowledge of it (although his reference at the MIT talk sort of implied otherwise).

future system effort posts
https://www.garlic.com/~lynn/submain.html#futuresys
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM System/360, 1964

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM System/360, 1964
Date: 11 July, 2023
Blog: Facebook
I took a 2 credit hr intro to fortran/computers class. At the end of the semester I was hired to reimplement 1401 MPIO in assembler for the 360/30. The univ had a 709 tape->tape with a 1401 for unit record front-end. The univ was sold a 360/67 for TSS/360, replacing the 709/1401. Pending the 360/67, the 1401 was replaced with a 360/30 (which had 1401 emulation so could continue to run MPIO ... but I assume my job was part of getting 360 experience). I was given a bunch of hardware&software manuals and got to design & implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc. The univ shut down the datacenter on weekends and I had the place dedicated (although 48 hrs w/o sleep made Monday classes hard) and within a few weeks I had a 2000 card assembler program.

Within a year of taking intro class, the 360/67 was installed and I was hired fulltime responsible for os/360 (tss/360 never came to production fruition so ran as 360/65).

709 ... fortran student job took under a sec

Initially on the 360/67 (as a 360/65 with OS/360), they took over a minute ... installing HASP cut the time in half. I then tore apart the stage2 sysgen and put it back together to carefully place datasets and PDS members to optimize arm seek and PDS directory multi-track search, cutting another 2/3rds to 12.9secs. Never got better than the 709 until I install univ of waterloo "WATFOR".
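
as a purely illustrative sketch (python, obviously nothing from that era, with made-up dataset names and reference counts), the placement idea amounts to putting the most heavily referenced datasets/PDS members nearest the arm's home position so the average arm travel drops:

# toy model of ordering datasets by reference frequency to cut average
# disk-arm travel; not the actual OS/360 STAGE2 SYSGEN mechanics, and the
# names/counts are invented for the example
datasets = {                     # dataset -> relative reference frequency
    "SYS1.LINKLIB": 50, "SYS1.SVCLIB": 20, "SYS1.PROCLIB": 15,
    "SYS1.MACLIB": 10, "PAYROLL.DATA": 5,
}

def avg_arm_travel(order):
    # weighted average arm position, treating each dataset as one cylinder
    # and measuring from cylinder 0 (a deliberate oversimplification)
    total = sum(datasets.values())
    return sum(pos * datasets[name] for pos, name in enumerate(order)) / total

naive     = sorted(datasets)                                   # e.g. alphabetical layout
optimized = sorted(datasets, key=datasets.get, reverse=True)   # hottest datasets first

print(avg_arm_travel(naive), avg_arm_travel(optimized))        # 1.95 vs 1.0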

some recent posts mentioning 709, MPIO, & WATFOR
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#14 Rent/Leased IBM 360
https://www.garlic.com/~lynn/2023c.html#96 Fortran
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#75 IBM Mainframe
https://www.garlic.com/~lynn/2023b.html#15 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023.html#65 7090/7044 Direct Couple
https://www.garlic.com/~lynn/2023.html#22 IBM Punch Cards
https://www.garlic.com/~lynn/2022h.html#99 IBM 360
https://www.garlic.com/~lynn/2022h.html#60 Fortran
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2022f.html#8 CICS 53 Years
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#45 IBM Chairman John Opel
https://www.garlic.com/~lynn/2022e.html#31 Technology Flashback
https://www.garlic.com/~lynn/2022d.html#110 Window Display Ground Floor Datacenters
https://www.garlic.com/~lynn/2022d.html#95 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#69 Mainframe History: How Mainframe Computers Evolved Over the Years
https://www.garlic.com/~lynn/2022d.html#57 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022d.html#10 Computer Server Market
https://www.garlic.com/~lynn/2022.html#72 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021k.html#1 PCP, MFT, MVT OS/360, VS1, & VS2
https://www.garlic.com/~lynn/2021j.html#64 addressing and protection, was Paper about ISO C
https://www.garlic.com/~lynn/2021j.html#63 IBM 360s
https://www.garlic.com/~lynn/2021f.html#43 IBM Mainframe

--
virtualization experience starting Jan1968, online at home since Mar1970

CICS Product 54yrs old today

From: Lynn Wheeler <lynn@garlic.com>
Subject: CICS Product 54yrs old today
Date: 11 July, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#60 CICS Product 54yrs old today
https://www.garlic.com/~lynn/2023d.html#63 CICS Product 54yrs old today

About the turn of the century, I visited a datacenter that had a large IBM mainframe with a banner over it that said something like "129 CICS Instances" ... this was before the 2004 multiprocessor exploitation.

cics/bdam posts
https://www.garlic.com/~lynn/submain.html#cics

posts mentioning multiple CICS Instances:
https://www.garlic.com/~lynn/2022h.html#54 smaller faster cheaper, computer history thru the lens of esthetics versus economics
https://www.garlic.com/~lynn/2022g.html#87 CICS (and other history)
https://www.garlic.com/~lynn/2021g.html#70 the wonders of SABRE, was Magnetic Drum reservations 1952
https://www.garlic.com/~lynn/2021.html#80 CICS
https://www.garlic.com/~lynn/2018d.html#68 tablets and desktops was Has Microsoft
https://www.garlic.com/~lynn/2017k.html#57 When did the home computer die?
https://www.garlic.com/~lynn/2017g.html#90 Stopping the Internet of noise
https://www.garlic.com/~lynn/2016.html#83 DEC and The Americans
https://www.garlic.com/~lynn/2013o.html#54 Curiosity: TCB mapping macro name - why IKJTCB?
https://www.garlic.com/~lynn/2013l.html#18 A Brief History of Cloud Computing
https://www.garlic.com/~lynn/2013g.html#39 Old data storage or data base
https://www.garlic.com/~lynn/2010.html#21 Happy DEC-10 Day

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM System/360, 1964

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM System/360, 1964
Date: 11 July, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#64 IBM System/360, 1964

Before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit to better monetize the investment, including offering services to non-Boeing entities). I think the Renton datacenter was possibly the largest in the world, something like a couple hundred million in IBM 360s (60s $$$), 360/65s arriving faster than they could be installed, boxes constantly staged in hallways around the machine room. Lots of politics between the Renton director and the CFO ... who only had a 360/30 up at Boeing Field for payroll (although they enlarge that machine room to install a 360/67 for me to play with when I'm not doing other stuff). 747#3 is flying the skies of Seattle getting flt certification and there is a cabin mockup south of Boeing Field ... the tour claims a 747 would never have fewer than four jetways at the gate (because of the large number of people). There is a disaster plan to replicate the Renton datacenter up at the new 747 plant in Everett (in case Mt. Rainier heats up and the resulting mud slide takes out the Renton datacenter).

Both the Boeing and IBM branch people tell story about the Boeing marketing rep on 360 announcement day (this was when IBM sales were still on straight commission). Boeing walks in with large order ... making the marketing rep the highest paid IBM employee that year. For the next year, IBM changes to "quota" ... and in January Boeing walks in with another large 360 order, making the marketing rep's quota for the year. IBM adjusts his quota, he leaves IBM a short time later.

some posts mentioning Boeing CFO & BCS, as well as IBM commission and quota
https://www.garlic.com/~lynn/2023c.html#86 IBM Commission and Quota
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023.html#12 IBM Marketing, Sales, Branch Offices
https://www.garlic.com/~lynn/2022h.html#99 IBM 360
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2022g.html#95 Iconic consoles of the IBM System/360 mainframes, 55 years old
https://www.garlic.com/~lynn/2022e.html#31 Technology Flashback
https://www.garlic.com/~lynn/2022d.html#106 IBM Quota
https://www.garlic.com/~lynn/2022d.html#100 IBM Stretch (7030) -- Aggressive Uniprocessor Parallelism
https://www.garlic.com/~lynn/2021e.html#80 Amdahl
https://www.garlic.com/~lynn/2021d.html#34 April 7, 1964: IBM Bets Big on System/360
https://www.garlic.com/~lynn/2021.html#48 IBM Quota
https://www.garlic.com/~lynn/2019d.html#60 IBM 360/67
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles
https://www.garlic.com/~lynn/2019b.html#38 Reminder over in linkedin, IBM Mainframe announce 7April1964

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM System/360, 1964

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM System/360, 1964
Date: 11 July, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#64 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#66 IBM System/360, 1964

Three people from the science center
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center
came out to install CP67/CMS at the univ (3rd after CSC itself and MIT Lincoln Labs) ... I mainly played with it during my 48hr weekend dedicated time ... when I rewrote a lot of the CP67/CMS code ... archived post with part of SHARE (mainframe user group)
https://en.wikipedia.org/wiki/SHARE_(computing)
OS360/MFT14&CP67 presentation about some early work (after having CP67 for six months)
https://www.garlic.com/~lynn/94.html#18
trivia: CP67/CMS later morphs into VM370/CMS ... some old history
https://www.leeandmelindavarian.com/Melinda#VMHist

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

some recent posts referring to the MFT14/CP67 presentation at SHARE
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023c.html#88 More Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023c.html#67 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#62 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#46 IBM DASD
https://www.garlic.com/~lynn/2023c.html#26 Global & Local Page Replacement
https://www.garlic.com/~lynn/2023b.html#15 IBM User Group, SHARE
https://www.garlic.com/~lynn/2022h.html#21 370 virtual memory
https://www.garlic.com/~lynn/2022f.html#44 z/VM 50th
https://www.garlic.com/~lynn/2022f.html#10 9 Mainframe Statistics That May Surprise You
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#50 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022d.html#57 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022d.html#45 MGLRU Revved Once More For Promising Linux Performance Improvements
https://www.garlic.com/~lynn/2022d.html#30 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022c.html#0 System Response
https://www.garlic.com/~lynn/2022b.html#22 IBM Cloud to offer Z-series mainframes for first time - albeit for test and dev
https://www.garlic.com/~lynn/2022b.html#20 CP-67
https://www.garlic.com/~lynn/2022b.html#13 360 Performance
https://www.garlic.com/~lynn/2022.html#26 Is this group only about older computers?
https://www.garlic.com/~lynn/2022.html#25 CP67 and BPS Loader
https://www.garlic.com/~lynn/2021j.html#59 Order of Knights VM
https://www.garlic.com/~lynn/2021j.html#0 IBM Lost Opportunities
https://www.garlic.com/~lynn/2021i.html#61 Virtual Machine Debugging
https://www.garlic.com/~lynn/2021h.html#71 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2021f.html#43 IBM Mainframe
https://www.garlic.com/~lynn/2021e.html#65 SHARE (& GUIDE)
https://www.garlic.com/~lynn/2021e.html#19 Univac 90/30 DIAG instruction
https://www.garlic.com/~lynn/2021d.html#38 April 7, 1964: IBM Bets Big on System/360
https://www.garlic.com/~lynn/2021d.html#37 April 7, 1964: IBM Bets Big on System/360
https://www.garlic.com/~lynn/2021c.html#40 Teaching IBM class
https://www.garlic.com/~lynn/2021b.html#81 The Golden Age of computer user groups
https://www.garlic.com/~lynn/2021b.html#64 Early Computer Use
https://www.garlic.com/~lynn/2021.html#17 Performance History, 5-10Oct1986, SEAS
https://www.garlic.com/~lynn/2020.html#26 What's Fortran?!?!
https://www.garlic.com/~lynn/2020.html#8 IBM timesharing terminal--offline preparation?

--
virtualization experience starting Jan1968, online at home since Mar1970

Tax Avoidance

From: Lynn Wheeler <lynn@garlic.com>
Subject: Tax Avoidance
Date: 12 July, 2023
Blog: Facebook
As Nonprofit Hospitals Reap Big Tax Breaks, States Scrutinize Their Required Charity Spending
https://www.nakedcapitalism.com/2023/07/as-nonprofit-hospitals-reap-big-tax-breaks-states-scrutinize-their-required-charity-spending.html

The takeover by Tower Health meant the 219-bed Pottstown Hospital no longer had to pay federal and state taxes. It also no longer had to pay local property taxes, taking away more than $900,000 a year from the already underfunded Pottstown School District, school officials said.

The school system appealed Pottstown Hospital's new nonprofit status, and earlier this year a state court struck down the facility's property tax break. It cited the "eye-popping" compensation for multiple Tower Health executives as contrary to how Pennsylvania law defines a charity.

The Pottstown case reflects the growing scrutiny of how much the nation's nonprofit hospitals spend -- and on what -- to justify billions in state and federal tax breaks. In exchange for these savings, hospitals are supposed to provide community benefits, like care for those who can't afford it and free health screenings.


.... snip ...

US tech giants restructured to sidestep Australian tax law, following PwC advice. The PwC Australia tax leak scandal continues to grow amid revelations Uber and Facebook set up new company structures weeks before a new tax avoidance law was set to kick in.
https://www.icij.org/investigations/luxembourg-leaks/us-tech-giants-restructured-to-sidestep-australian-tax-law-following-pwc-advice/

tax fraud, tax evasion, tax loopholes, tax avoidance, tax haven posts
https://www.garlic.com/~lynn/submisc.html#tax.evasion
private equity posts (PE has been heavily moving into health care, rest homes, medical practices, hospitals, etc)
https://www.garlic.com/~lynn/submisc.html#private.equity

--
virtualization experience starting Jan1968, online at home since Mar1970

Fortran, IBM 1130

From: Lynn Wheeler <lynn@garlic.com>
Subject: Fortran, IBM 1130
Date: 13 July, 2023
Blog: Facebook
Some of the MIT 7094/CTSS people went to MULTICS on the 5th flr, others went to the IBM cambridge science center on the 4th flr
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center
the science center had a 2250mod4 ... 2250 graphics terminal with 1130 as controller.
https://en.wikipedia.org/wiki/IBM_2250
somebody had ported PDP-1 Spacewar to the 1130/2250.
https://en.wikipedia.org/wiki/Spacewar%21

1130 article
https://en.wikipedia.org/wiki/IBM_1130
also mentioning "1500"
https://en.wikipedia.org/wiki/IBM_1500
in the 60s, my wife had a job at the Naval Academy in Annapolis, programming the 1500
https://apps.dtic.mil/sti/citations/AD0746855
https://trid.trb.org/view/7982
https://web.archive.org/web/20090604181740/http://www.uofaweb.ualberta.ca/educationhistory/IBM1500Systems_NorthAmerica.cfm

I took a 2 credit hr intro to fortran/computers class. At the end of the semester I was hired to reimplement 1401 MPIO in assembler for the 360/30. The univ had a 709 tape->tape with a 1401 for unit record front-end. The univ was sold a 360/67 for TSS/360, replacing the 709/1401. Pending the 360/67, the 1401 was replaced with a 360/30 (which had 1401 emulation so could continue to run MPIO ... but I assume my job was part of getting 360 experience). I was given a bunch of hardware&software manuals and got to design & implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc. The univ shut down the datacenter on weekends and I had the place dedicated (although 48 hrs w/o sleep made Monday classes hard) and within a few weeks I had a 2000 card assembler program.

Within a year of taking intro class, the 360/67 was installed and I was hired fulltime responsible for os/360 (tss/360 never came to production fruition so ran as 360/65).

709 ... fortran student job took under a sec. Initially on the 360/67 (as a 360/65 with OS/360), they took over a minute ... installing HASP cut the time in half. I then tore apart the stage2 sysgen and put it back together to carefully place datasets and PDS members to optimize arm seek and PDS directory multi-track search, cutting another 2/3rds to 12.9secs. Never got better than the 709 until I install univ of waterloo "WATFOR".

3 people from the science center came out and installed CP67/CMS at the univ (3rd install after cambridge itself and MIT Lincoln Labs) ... mostly played with it during my dedicated weekend time; rewrote a lot of the CP67&CMS code ... old archived post with part of 60s SHARE presentation (a few months after CP67/CMS install) about some of the MFT14&CP67 work
https://www.garlic.com/~lynn/94.html#18

Then before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit to better monetize the investment, including offering services to non-Boeing entities, sort of early "cloud"). I think the Renton datacenter was possibly the largest in the world, a couple hundred million in IBM systems, 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room. Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing Field for payroll (although they enlarge the machine room for a 360/67 for me to play with, when I'm not doing other stuff).

Early 80s, I'm introduced to John Boyd and would sponsor his briefings. One of his stories was that he was very vocal that the electronics across the trail wouldn't work, and possibly as punishment he is put in command of "spook base" (about the same time I'm at Boeing); he would say that spook base had the largest air-conditioned bldg in that part of the world ... one of his biographies claims spook base was a $2.5B "windfall" for IBM (ten times Renton). The bio also references that when he was an instructor at Nellis, he was possibly the best fighter pilot in the world. Spook base ref (gone 404, but lives on at the wayback machine):
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
the above has a picture referencing 2250s inside the datacenter ... but it looks more like a row of operators at radar scopes. It also mentions being hooked to numerous 1130/2250 systems.

cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
Boyd posts and web refs
https://www.garlic.com/~lynn/subboyd.html

some recent posts mentioning 1401 MPIO, Fortran, WATFOR, Boeing CFO
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#75 IBM Mainframe
https://www.garlic.com/~lynn/2022h.html#99 IBM 360
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022d.html#110 Window Display Ground Floor Datacenters
https://www.garlic.com/~lynn/2022d.html#95 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#57 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021j.html#64 addressing and protection, was Paper about ISO C

other recent posts mentioning john boyd and "spook base":
https://www.garlic.com/~lynn/2023d.html#33 IBM 360s
https://www.garlic.com/~lynn/2023c.html#86 IBM Commission and Quota
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#25 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023.html#22 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#12 IBM Marketing, Sales, Branch Offices
https://www.garlic.com/~lynn/2022h.html#55 More John Boyd and OODA-loop
https://www.garlic.com/~lynn/2022h.html#18 Sun Tzu, Aristotle, and John Boyd
https://www.garlic.com/~lynn/2022h.html#0 Iconic consoles of the IBM System/360 mainframes, 55 years old
https://www.garlic.com/~lynn/2022g.html#49 Some BITNET (& other) History
https://www.garlic.com/~lynn/2022e.html#65 IBM Wild Ducks
https://www.garlic.com/~lynn/2022d.html#106 IBM Quota
https://www.garlic.com/~lynn/2022d.html#102 IBM Stretch (7030) -- Aggressive Uniprocessor Parallelism
https://www.garlic.com/~lynn/2022b.html#73 IBM Disks
https://www.garlic.com/~lynn/2022b.html#27 Dataprocessing Career
https://www.garlic.com/~lynn/2022.html#48 Mainframe Career
https://www.garlic.com/~lynn/2022.html#30 CP67 and BPS Loader
https://www.garlic.com/~lynn/2021i.html#89 IBM Downturn
https://www.garlic.com/~lynn/2021i.html#6 The Kill Chain: Defending America in the Future of High-Tech Warfare
https://www.garlic.com/~lynn/2021h.html#64 WWII Pilot Barrel Rolls Boeing 707
https://www.garlic.com/~lynn/2021h.html#35 IBM/PC 12Aug1981
https://www.garlic.com/~lynn/2021e.html#80 Amdahl
https://www.garlic.com/~lynn/2021d.html#34 April 7, 1964: IBM Bets Big on System/360
https://www.garlic.com/~lynn/2021b.html#19 IBM Recruiting
https://www.garlic.com/~lynn/2021.html#78 Interactive Computing
https://www.garlic.com/~lynn/2021.html#48 IBM Quota

--
virtualization experience starting Jan1968, online at home since Mar1970

Who Employs Your Doctor? Increasingly, a Private Equity Firm

From: Lynn Wheeler <lynn@garlic.com>
Subject: Who Employs Your Doctor? Increasingly, a Private Equity Firm
Date: 13 July, 2023
Blog: Facebook
Who Employs Your Doctor? Increasingly, a Private Equity Firm. A new study finds that private equity firms own more than half of all specialists in certain U.S. markets.
https://web.archive.org/web/20230710222207/https://www.nytimes.com/2023/07/10/upshot/private-equity-doctors-offices.html

trivia: the industry got such a bad reputation during the S&L crisis that they changed the industry name to private equity and "junk bonds" became "high-yield bonds".
private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
S&L Crisis posts
https://www.garlic.com/~lynn/submisc.html#s&l.crisis

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM System/360, 1964

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM System/360, 1964
Date: 13 July, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#64 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#66 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#67 IBM System/360, 1964
also
https://www.garlic.com/~lynn/2023d.html#60 CICS Product 54yrs old today
https://www.garlic.com/~lynn/2023d.html#63 CICS Product 54yrs old today
https://www.garlic.com/~lynn/2023d.html#65 CICS Product 54yrs old today
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130

After joining the IBM Science Center, one of my hobbies was enhanced production operating systems for internal datacenters (the world-wide online sales&marketing support HONE system was a long-time customer). Then after the decision to add virtual memory to all 370s, there was a CP67 modified to run on 370 (along with device drivers for 2305 & 3330 DASD). This "CP370" was used for a long time before VM370 was available. Also, the morph of CP67 to VM370 greatly simplified and/or dropped a lot of features ... like multiprocessor support. In 1974, I spent some amount of time putting CP67 stuff back into VM370 before I was able to start shipping enhanced VM370 to internal datacenters.

I had an automated benchmarking infrastructure for CP67 and one of the first things was migrating it to VM370 ... unfortunately the automated benchmarks would consistently crash VM370 ... so the next thing was porting the CP67 kernel serialization mechanism to VM370, so I could reliably run benchmarks w/o the system consistently crashing.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
benchmarking, workload profile, capacity planning posts
https://www.garlic.com/~lynn/submain.html#benchmark
availability posts
https://www.garlic.com/~lynn/submain.html#availability
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

... other topic drift ... A decade or so ago, I was asked to track down the decision to add virtual memory to all 370s, and found a member of the staff to the executive making the decision ... basically MVT storage management was so bad that region sizes had to be specified four times larger than typically used. A typical 1mbyte 370/165 would only be running four regions concurrently, insufficient to keep the system busy and justified. VS2, initially going to a 16mbyte virtual memory (similar to running MVT in a CP67 16mbyte virtual machine), could have the number of concurrently running regions increased by a factor of four (with little or no paging). Archived post with pieces of the email exchange:
https://www.garlic.com/~lynn/2011d.html#73
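
a back-of-envelope sketch of that argument (python; the 4x over-specification and the 1mbyte/four-region numbers are just taken from the account above, the 64kbyte working set is an assumption, and fixed kernel storage is ignored to keep it simple):

# rough arithmetic behind the MVT -> VS2 justification described above
real_storage_kb    = 1024                  # typical 1mbyte 370/165
working_set_kb     = 64                    # assumed storage a region actually touches
declared_region_kb = 4 * working_set_kb    # regions specified 4x larger than used

# MVT: the whole declared region occupies real storage
mvt_regions = real_storage_kb // declared_region_kb    # -> 4 concurrent regions

# VS2 (16mbyte virtual address space): only the touched pages need real
# storage, so roughly 4x as many regions fit with little or no paging
vs2_regions = real_storage_kb // working_set_kb        # -> 16 concurrent regions

print(mvt_regions, vs2_regions)                        # 4 16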

--
virtualization experience starting Jan1968, online at home since Mar1970

Some Virtual Machine History

From: Lynn Wheeler <lynn@garlic.com>
Subject: Some Virtual Machine History
Date: 14 July, 2023
Blog: Linkedin
when I transferred to SJR, I got to wander around most of the datacenters in &/or around silicon valley (both IBM and customers), including bldg14 (disk engineering) and bldg15 (disk product test) across the street. They were running prescheduled, stand-alone mainframe testing 7x24 ... they said that they had recently tried MVS, but it had a 15min mean-time-between-failure (requiring manual re-ipl) in their environment. I offered to rewrite the I/O supervisor to make it bullet-proof and never fail, allowing any amount of on-demand, concurrent testing (greatly improving productivity). I then write a research report about the work, happening to mention the MVS 15min MTBF ... bringing down the wrath of the MVS org on my head (I was informally told they tried to have me separated from the company; when that didn't work, they tried other attacks). Some years later, 3380s were about to ship and FE had 57 simulated errors they felt were likely to occur. MVS was failing in all 57 cases and in 2/3rds of the cases left no indication of what caused the failure ... I didn't feel too bad (the joke about MVS recovery code was that it had repeatedly covered up the original error by the time MVS eventually failed).

At the univ, I had taken a 2 credit hr intro to fortran/computers. The univ had a 709/1401 and had been sold a 360/67 for TSS/360. Within a year of taking the intro class, the 360/67 replaces the 709/1401 and I had been hired fulltime responsible for OS/360 (TSS/360 never came to fruition so the 67 was run as a 65). The univ shut down the datacenter over the weekends, and I would have the whole place dedicated for 48hrs straight. The 709 tape->tape had run student fortran jobs in under a second. Initially on 360/67-OS/360, they ran over a minute. I installed HASP and the time was cut in half. I then tore apart STAGE2 SYSGEN and put it back together for placement of datasets and PDS members, optimizing arm seek and PDS directory multi-track search ... cutting the time by another 2/3rds to 12.9secs. Student fortran never got better than the 709 until I install WATFOR. Then three CSC people come out to install CP67 (3rd install after CSC itself and MIT Lincoln Labs) ... and I mostly got to play with it on weekends. Old archived post with pieces of a '68 SHARE presentation (after playing w/CP67 for several months) about both the OS/360 optimization and rewriting lots of CP67/CMS code.
https://www.garlic.com/~lynn/94.html#18

knights mar1978
http://mvmua.org/knights.html

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

A few posts mentioning univ. 709/1401, Fortran, WATFOR, CP67/CMS
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023c.html#88 More Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023c.html#67 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#62 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#26 Global & Local Page Replacement
https://www.garlic.com/~lynn/2023b.html#15 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023.html#65 7090/7044 Direct Couple

--
virtualization experience starting Jan1968, online at home since Mar1970

Some Virtual Machine History

From: Lynn Wheeler <lynn@garlic.com>
Subject: Some Virtual Machine History
Date: 15 July, 2023
Blog: Linkedin
re:
https://www.garlic.com/~lynn/2023d.html#72 Some Virtual Machine History

slight topic drift: CERN & SLAC did the 168E & 3081E (enough of a 370 to run Fortran for initial data reduction from the sensors)
http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-3069.pdf
http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-3680.pdf
http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-3753.pdf

Some of the MIT CTSS/7094 people went to the 5th flr for MULTICS, others went to the IBM science center on the 4th flr and did virtual machines ... initially CP40/CMS on a 360/40 with hardware mods implementing virtual memory, which morphs into CP67/CMS when the 360/67 (with virtual memory standard) became available (later morphs into VM370 when the decision was made to add virtual memory to all 370s). CTSS "RUNOFF" was redone for CMS as "SCRIPT". In 1969, GML was invented at the science center and GML tag processing added to SCRIPT. After a decade GML morphs into ISO standard SGML, and after another decade morphs into HTML at CERN. The first webserver in the US was on SLAC's VM370 system
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994

Other CSC trivia: the science center originally wanted a 360/50 to modify with virtual memory, but all the available 50s were going to the FAA ATC program, so they had to settle for a 40. Book about the FAA/ATC effort:
https://www.amazon.com/Brawl-IBM-1964-Joseph-Fox/dp/1456525514

Two mid air collisions 1956 and 1960 make this FAA procurement special. The computer selected will be in the critical loop of making sure that there are no more mid-air collisions. Many in IBM want to not bid. A marketing manager with but 7 years in IBM and less than one year as a manager is the proposal manager. IBM is in midstep in coming up with the new line of computers - the 360. Chaos sucks into the fray many executives- especially the next chairman, and also the IBM president. A fire house in Poughkeepsie N Y is home to the technical and marketing team for 60 very cold and long days. Finance and legal get into the fray after that

... snip ...

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML posts
https://www.garlic.com/~lynn/submain.html#sgml

posts mentioning 168e/3081e
https://www.garlic.com/~lynn/2023d.html#34 IBM Mainframe Emulation
https://www.garlic.com/~lynn/2023b.html#92 IRS and legacy COBOL
https://www.garlic.com/~lynn/2022g.html#54 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2021b.html#50 Holy wars of the past - how did they turn out?
https://www.garlic.com/~lynn/2020.html#40 If Memory Had Been Cheaper
https://www.garlic.com/~lynn/2017k.html#47 When did the home computer die?
https://www.garlic.com/~lynn/2017j.html#82 A Computer That Never Was: the IBM 7095
https://www.garlic.com/~lynn/2017j.html#81 A Computer That Never Was: the IBM 7095
https://www.garlic.com/~lynn/2017d.html#78 Mainframe operating systems?
https://www.garlic.com/~lynn/2017c.html#10 SC/MP (1977 microprocessor) architecture
https://www.garlic.com/~lynn/2016e.html#24 Is it a lost cause?
https://www.garlic.com/~lynn/2016b.html#78 Microcode
https://www.garlic.com/~lynn/2015c.html#52 The Stack Depth
https://www.garlic.com/~lynn/2015b.html#28 The joy of simplicity?
https://www.garlic.com/~lynn/2015.html#87 a bit of hope? What was old is new again
https://www.garlic.com/~lynn/2015.html#79 Ancient computers in use today
https://www.garlic.com/~lynn/2015.html#69 Remembrance of things past
https://www.garlic.com/~lynn/2014.html#85 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013l.html#27 World's worst programming environment?
https://www.garlic.com/~lynn/2012l.html#72 zEC12, and previous generations, "why?" type question - GPU computing
https://www.garlic.com/~lynn/2003n.html#8 The IBM 5100 and John Titor
https://www.garlic.com/~lynn/2002b.html#43 IBM 5100 [Was: First DESKTOP Unix Box?]

posts mentioning Fox and FAA/ATC effort
https://www.garlic.com/~lynn/2022g.html#2 VM/370
https://www.garlic.com/~lynn/2022f.html#113 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022d.html#58 IBM 360/50 Simulation From Its Microcode
https://www.garlic.com/~lynn/2022b.html#97 IBM 9020
https://www.garlic.com/~lynn/2022.html#23 Target Marketing
https://www.garlic.com/~lynn/2021i.html#20 FAA Mainframe
https://www.garlic.com/~lynn/2021f.html#9 Air Traffic System
https://www.garlic.com/~lynn/2021e.html#13 IBM Internal Network
https://www.garlic.com/~lynn/2021.html#42 IBM Rusty Bucket
https://www.garlic.com/~lynn/2019c.html#44 IBM 9020
https://www.garlic.com/~lynn/2019b.html#88 IBM 9020 FAA/ATC Systems from 1960's
https://www.garlic.com/~lynn/2019b.html#73 The Brawl in IBM 1964

--
virtualization experience starting Jan1968, online at home since Mar1970

Some Virtual Machine History

From: Lynn Wheeler <lynn@garlic.com>
Subject: Some Virtual Machine History
Date: 15 July, 2023
Blog: Linkedin
re:
https://www.garlic.com/~lynn/2023d.html#72 Some Virtual Machine History
https://www.garlic.com/~lynn/2023d.html#73 Some Virtual Machine History

In the early days of REX (before it was renamed REXX and released to customers) I wanted to demonstrate that it wasn't just another pretty scripting language. I chose to rewrite a large assembler application (problem & dump analyzer) in REX with the objective of ten times the function and ten times the performance (some sleight of hand going from assembler to interpreted REX), working half-time over three months. Turns out I finish early, and so add a library of automated functions that look for known failure and problem signatures. I expected it would be released to customers, replacing the existing assembler implementation, but for various reasons it wasn't (it was in use by most internal datacenters and customer PSRs). I do get permission to give presentations at customer user group meetings on how I did the implementation ... and within a few months similar implementations start appearing at customer shops.
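
just to illustrate the "known failure signature" idea (python rather than REX, and the signatures below are invented, i.e. not the actual DUMPRX library), such a scan can be as simple as running a set of patterns over formatted dump output:

import re

# invented signatures, purely to illustrate a library of automated checks
# that look for known failure/problem patterns in formatted dump output
# (this is not the actual DUMPRX logic)
SIGNATURES = [
    (re.compile(r"ABEND\s+S?0C4"),      "protection exception, check for storage overlay"),
    (re.compile(r"FREE STORAGE CHAIN"), "free-storage chain damage, suspect double release"),
    (re.compile(r"LOCKWORD NOT ZERO"),  "serialization problem, lock held across failure"),
]

def scan_dump(lines):
    # return (line number, diagnosis) for every known signature found
    hits = []
    for num, line in enumerate(lines, 1):
        for pattern, diagnosis in SIGNATURES:
            if pattern.search(line):
                hits.append((num, diagnosis))
    return hits

# usage: hits = scan_dump(open("dump.txt").read().splitlines())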

Later I get some email from the 3092 group ... the 3090 service processor (it started out as a 4331 with a highly customized VM370 release 6, with all the service screens implemented in CMS IOS3270, but morphs into a pair of 4361s; note it also requires a pair of 3370FBAs, even for MVS shops that have never had FBA support) ... wanting to include "DUMPRX"
https://www.garlic.com/~lynn/2010e.html#email861031
https://www.garlic.com/~lynn/2010e.html#email861223

DUMPRX posts
https://www.garlic.com/~lynn/submain.html#dumprx

--
virtualization experience starting Jan1968, online at home since Mar1970

And it's gone -- The true cost of interrupts

From: Lynn Wheeler <lynn@garlic.com>
Subject: And it's gone -- The true cost of interrupts
Date: 15 July, 2023
Blog: Facebook
And it's gone -- The true cost of interrupts
https://devm.io/careers/aaaand-gone-true-cost-interruptions-128741

from "Real Programmers" ... long ago and far away:

Real Programmers never work 9 to 5. If any real programmers are around at 9am, it's because they were up all night.

... snip ...

past posts mentioning "Real Programmers":
https://www.garlic.com/~lynn/2023.html#118 Google Tells Some Employees to Share Desks After Pushing Return-to-Office Plan
https://www.garlic.com/~lynn/2023.html#61 Software Process
https://www.garlic.com/~lynn/2022h.html#90 Psychology of Computer Programming
https://www.garlic.com/~lynn/2022d.html#28 Remote Work
https://www.garlic.com/~lynn/2021d.html#49 Real Programmers and interruptions
https://www.garlic.com/~lynn/2019.html#6 Fwd: It's Official: Open-Plan Offices Are Now the Dumbest Management Fad of All Time | Inc.com
https://www.garlic.com/~lynn/2018b.html#56 Computer science hot major in college (article)
https://www.garlic.com/~lynn/2017i.html#53 When Working From Home Doesn't Work
https://www.garlic.com/~lynn/2017h.html#27 OFF TOPIC: University of California, Irvine, revokes 500 admissions
https://www.garlic.com/~lynn/2016f.html#19 And it's gone --The true cost of interruptions
https://www.garlic.com/~lynn/2016d.html#72 Five Outdated Leadership Ideas That Need To Die
https://www.garlic.com/~lynn/2014.html#24 Scary Sysprogs and educating those 'kids'
https://www.garlic.com/~lynn/2014.html#23 Scary Sysprogs and educating those 'kids'
https://www.garlic.com/~lynn/2013m.html#16 Work long hours (Was Re: Pissing contest(s))
https://www.garlic.com/~lynn/2002e.html#39 Why Use *-* ?
https://www.garlic.com/~lynn/2001e.html#31 High Level Language Systems was Re: computer books/authors (Re: FA:

--
virtualization experience starting Jan1968, online at home since Mar1970

Private, Public, Internet

From: Lynn Wheeler <lynn@garlic.com>
Subject: Private, Public, Internet
Date: 16 July, 2023
Blog: Linkedin
... mid-90s, one of the big transitions to internet ... there were presentations at financial industry conferences about moving their dedicated consumer dial-up banking to standard browsers&internet ... basically they were offloading an enormous amount of their dedicated infrastructure costs onto ISPs where those costs were spread across a large number of different operations and services. Among other things they claimed they had to maintain an avg. of 64 different apps & device drivers for different computers, operating systems, operating system versions and hardware modems ... along with dedicated trouble desks and call centers that supported all the different versions

part of the issue was that dial-up banking wasn't a large enough market for wide-spread standardization

consumer dial-up banking posts
https://www.garlic.com/~lynn/submisc.html#dialup-banking

--
virtualization experience starting Jan1968, online at home since Mar1970

Private Equity Firms Increasingly Held Responsible for Healthcare Fraud

From: Lynn Wheeler <lynn@garlic.com>
Subject: Private Equity Firms Increasingly Held Responsible for Healthcare Fraud
Date: 16 July, 2023
Blog: Linkedin
Private Equity Firms Increasingly Held Responsible for Healthcare Fraud
https://waterskraus.com/private-equity-firms-increasingly-held-responsible-for-healthcare-fraud/

trivia: the industry got such a bad reputation during the S&L crisis that they changed the industry name to private equity and "junk bonds" became "high-yield bonds".
private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
risk, fraud, exploits, threats, vulnerabilities
https://www.garlic.com/~lynn/subintegrity.html#fraud

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM System/360, 1964

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM System/360, 1964
Date: 17 July, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#64 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#66 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#67 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#71 IBM System/360, 1964
also
https://www.garlic.com/~lynn/2023d.html#60 CICS Product 54yrs old today
https://www.garlic.com/~lynn/2023d.html#63 CICS Product 54yrs old today
https://www.garlic.com/~lynn/2023d.html#65 CICS Product 54yrs old today
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130

Some of the MIT CTSS/7094 people went to the 5th flr to do MULTICS, others went to the IBM science center on the 4th flr and did virtual machines (initially CP40/CMS for 360/40 w/hardware modifications for virtual memory, morps into CP67/CMS when 360/67 standard with virtual memory becomes available, later morphs into VM370 when decision was made to add virtual memory to all 370s). In the 60s, there were two online commercial spinoffs of the science center.

There was lots of work by the science center and the commercial online virtual machine service bureaus to provide 7x24 access. Part of this was supporting dark-room, unattended operations. This was also back in the days when IBM leased/rented systems and charges were based on the CPU "system meter" that ran anytime any CPU or channel was busy ... Especially for offshift, special (terminal) channel programs were developed that allowed the channels to go idle, but immediately wake up anytime there were arriving characters.

Note the CPU(s) and channels (i.e. all system components) had to be idle for at least 400ms before the system meter would stop (trivia: long after IBM had switched from leased/rented to sales, and billing was no longer based on the system meter, MVS still had a timer task that woke up every 400ms, guaranteeing the system meter never stopped).
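
a toy simulation of that rule (python; the busy-event patterns are invented, and this is just the 400ms rule as stated above, not how the meter hardware actually worked):

# the meter runs whenever any CPU/channel has been busy within the last
# 400ms; something that wakes up at least every 400ms keeps it running
METER_WINDOW_MS = 400

def metered_ms(busy_times_ms, horizon_ms):
    # busy_times_ms: set of times (ms) at which some component was busy
    running, last_busy = 0, None
    for t in range(horizon_ms):
        if t in busy_times_ms:
            last_busy = t
        if last_busy is not None and t - last_busy < METER_WINDOW_MS:
            running += 1
    return running

timer_task = set(range(0, 10_000, 400))    # wakeup every 400ms (like the MVS timer task)
sparse     = set(range(0, 10_000, 2_000))  # wakeup only every 2 seconds

print(metered_ms(timer_task, 10_000))      # 10000 ... meter never stops
print(metered_ms(sparse, 10_000))          # 2000 ... meter stops between bursts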

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
commercial, online, virtual machine service bureaus
https://www.garlic.com/~lynn/submain.html#online

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM System/360 JCL

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM System/360 JCL
Date: 17 July, 2023
Blog: Facebook
I took a 2 credit hr intro to fortran/computers class. At the end of the semester I was hired to reimplement 1401 MPIO in assembler for the 360/30. The univ had a 709 tape->tape with a 1401 for unit record front-end. The univ was sold a 360/67 for TSS/360, replacing the 709/1401. Pending the 360/67, the 1401 was replaced with a 360/30 (which had 1401 emulation so could continue to run MPIO ... but I assume my job was part of getting 360 experience). I was given a bunch of hardware&software manuals and got to design & implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc. The univ shut down the datacenter on weekends and I had the place dedicated (although 48 hrs w/o sleep made Monday classes hard) and within a few weeks I had a 2000 card assembler program. trivia: I learned to disassemble the 2540 to clean the reader/punch, clean the 1403, and clean the tape drives 1st thing Sat. Also, sometimes production had finished early and all the machines were powered off when I came in Sat. morning. I would hit 360/30 power-on and periodically it wouldn't complete the power-up sequence. After some amount of trial&error, I learned to put all the controllers in CE-mode, power on the 360/30, and then individually power on the controllers before taking them out of CE-mode.

Within a year of taking the intro class (and the 360/67 arriving), I was hired fulltime responsible for os/360. One of the issues was that almost every new release (and sometimes large PTF packages) would break some JCL (usually in the administrative production workload), requiring extensive testing before putting system changes into production.

Other trivia: the student fortran jobs took less than a second on the 709 ... but started out at over a minute on os/360 (360/67 running as a 360/65; TSS/360 never quite came to production fruition). I install HASP and cut the student fortran job time in half. I then start doing extensive rework of STAGE2 SYSGEN, placing datasets and PDS members to optimize arm seek (and PDS directory multi-track search), cutting another 2/3rds to 12.9secs (student fortran jobs never got better than the 709 until I install univ. of waterloo WATFOR).

Then some people came out from the Cambridge Science Center to install CP67/CMS ... and I'm mostly restricted to playing with it in my weekend dedicated time. Note some of the MIT CTSS/7094 people went to the 5th flr and did MULTICS, others went to the IBM cambridge science center and did virtual machines, the internal corporate network, lots of performance and interactive work, and invented GML in 1969 (CTSS RUNOFF had been done for CMS as SCRIPT and then, after GML was invented, GML tag processing was added to SCRIPT; after a decade it morphs into ISO standard SGML, and after another decade it morphs into HTML at CERN).

After graduating, I joined the science center and one of my hobbies was enhanced production operating systems for internal datacenters. After the decision was made to add virtual memory to all 370s, CP67/CMS was redone as VM370/CMS (although lots of CP67/CMS stuff was either greatly simplified and/or dropped). I spent some amount of 1974 upgrading VM370/CMS to the CP67/CMS level. I had an automated benchmarking system where I could specify the number of users and what kind of workload each user would run. This was one of the first things that I moved to VM370/CMS ... but VM370 would constantly crash ... so the next item was the CP67 kernel serialization mechanism ... in order to get VM370/CMS to run through a set of benchmarks w/o crashing. It was 1975 before I had VM370 enhanced to where I could distribute it to internal datacenters.
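
the shape of that kind of automated benchmark driver (python sketch, not the actual CP67/VM370 scripts; the number of users and the per-user workload are the knobs, and the workload function here is just a stand-in):

import threading, time, random

def edit_compile_workload(user_id):
    # stand-in for a scripted interactive edit/compile/execute session
    time.sleep(random.uniform(0.01, 0.05))

def run_benchmark(num_users, workload, iterations=10):
    # run num_users simulated users, each doing the workload repeatedly,
    # and return the average response time
    results, lock = [], threading.Lock()

    def user(uid):
        for _ in range(iterations):
            start = time.time()
            workload(uid)
            with lock:
                results.append(time.time() - start)

    threads = [threading.Thread(target=user, args=(u,)) for u in range(num_users)]
    for t in threads: t.start()
    for t in threads: t.join()
    return sum(results) / len(results)

print(run_benchmark(num_users=8, workload=edit_compile_workload))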

After leaving IBM ... I did some mainframe performance work. Around the turn of the century, one datacenter had 40+ max-configured mainframe systems (@$30M each, >$1.2B total, constantly being upgraded), all running the same 450K statement COBOL app ... the number of systems needed to finish financial settlement in the overnight batch window. They had a large group that had been managing the performance care&feeding of the COBOL app for decades (but had possibly gotten somewhat myopically focused). I used some performance analysis technology from the early 70s science center and found a 14% improvement.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
runoff, script, gml, sgml, html posts
https://www.garlic.com/~lynn/submain.html#sgml

some recent posts mentioning 709, 1401, mpio, fortran, watfor
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2023d.html#64 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#14 Rent/Leased IBM 360
https://www.garlic.com/~lynn/2023c.html#96 Fortran
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#75 IBM Mainframe
https://www.garlic.com/~lynn/2023b.html#15 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023.html#65 7090/7044 Direct Couple
https://www.garlic.com/~lynn/2023.html#22 IBM Punch Cards

some recent posts mentioning 450K statement cobol app
https://www.garlic.com/~lynn/2023c.html#99 Account Transaction Update
https://www.garlic.com/~lynn/2023b.html#87 IRS and legacy COBOL
https://www.garlic.com/~lynn/2023.html#90 Performance Predictor, IBM downfall, and new CEO
https://www.garlic.com/~lynn/2022h.html#54 smaller faster cheaper, computer history thru the lens of esthetics versus economics
https://www.garlic.com/~lynn/2022f.html#3 COBOL and tricks
https://www.garlic.com/~lynn/2022e.html#58 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022c.html#11 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#56 Fujitsu confirms end date for mainframe and Unix systems
https://www.garlic.com/~lynn/2022.html#104 Mainframe Performance
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021k.html#58 Card Associations
https://www.garlic.com/~lynn/2021j.html#30 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021i.html#87 UPS & PDUs
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2021c.html#61 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2021b.html#4 Killer Micros
https://www.garlic.com/~lynn/2021.html#7 IBM CEOs

--
virtualization experience starting Jan1968, online at home since Mar1970

Airline Reservation System

From: Lynn Wheeler <lynn@garlic.com>
Subject: Airline Reservation System
Date: 18 July, 2023
Blog: Facebook
Late 80s, my wife did a short stint as chief architect of the European Amadeus system (built off the old Eastern Airlines System/One res system). She didn't remain long because she sided with Europe on X.25 (instead of SNA) and the IBM communication group got her replaced. It didn't do them much good, because Europe went with X.25 anyway and their replacement got replaced. In later years, I have some recollection of the Amadeus office sign when traveling to/from Heathrow.
https://en.wikipedia.org/wiki/Amadeus_IT_Group#History

After leaving IBM in the early 90s, I was brought into the largest res system to look at the ten impossible things they couldn't do ... initially looking at "ROUTES", and was given a softcopy of the complete OAG, i.e. all commercial airline flt segments, used for finding direct and connecting flts for a customer origin/destination. I came back in two months with a ROUTES implementation on RS/6000 that did all ten of their impossible things ... and it would require ten RS6000/990s to be able to handle every ROUTES request for all airlines & passengers in the world. It was initially 100 times faster (than the mainframe version), but adding the ten impossible things slowed it down to only ten times faster.

Then the hand-wringing started: the existing (mainframe) implementation had several hundred people managing/supporting ROUTES, which was all eliminated by the new implementation (ROUTES represented about 25% of processing and they wouldn't let me near FARES, which represented 40%). I would claim that the existing mainframe implementations still had vestiges of 60s technology design trade-offs, and starting from scratch, I was allowed to make completely different technology design trade-offs.
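
the core ROUTES lookup (direct plus one-connection itineraries for an origin/destination pair from a table of flight segments) is easy to sketch; the python below uses invented segments and obviously ignores schedules, minimum connect times, fares, and the "ten impossible things":

from collections import defaultdict

# toy version of the ROUTES lookup described above: given flight segments
# (airline schedule data like the OAG), find direct flights and
# one-connection itineraries for an origin/destination pair
segments = [            # (flight, origin, destination) -- invented examples
    ("AA10", "SFO", "ORD"), ("AA22", "ORD", "LHR"),
    ("UA5",  "SFO", "DEN"), ("UA7",  "DEN", "LHR"),
    ("BA284","SFO", "LHR"),
]

by_origin = defaultdict(list)
for flight, org, dst in segments:
    by_origin[org].append((flight, dst))

def routes(origin, destination):
    direct = [f for f, d in by_origin[origin] if d == destination]
    connecting = [(f1, f2)
                  for f1, mid in by_origin[origin] if mid != destination
                  for f2, d2 in by_origin[mid] if d2 == destination]
    return direct, connecting

print(routes("SFO", "LHR"))
# (['BA284'], [('AA10', 'AA22'), ('UA5', 'UA7')])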

a few posts mentioning Amadeus, ROUTES, RS/6000
https://www.garlic.com/~lynn/2023c.html#8 IBM Downfall
https://www.garlic.com/~lynn/2023.html#96 Mainframe Assembler
https://www.garlic.com/~lynn/2022h.html#10 Google Cloud Launches Service to Simplify Mainframe Modernization
https://www.garlic.com/~lynn/2022c.html#76 lock me up, was IBM Mainframe market
https://www.garlic.com/~lynn/2021.html#71 Airline Reservation System
https://www.garlic.com/~lynn/2016.html#58 Man Versus System
https://www.garlic.com/~lynn/2015d.html#84 ACP/TPF
https://www.garlic.com/~lynn/2011d.html#43 Sabre; The First Online Reservation System

--
virtualization experience starting Jan1968, online at home since Mar1970

Taligent and Pink

From: Lynn Wheeler <lynn@garlic.com>
Subject: Taligent and Pink
Date: 18 July, 2023
Blog: Facebook
10-11Aug1995, I did a JAD with a dozen or so Taligent people on the use of Taligent for business critical applications. There were extensive classes/frameworks for GUI & client/server support, but various critical pieces were missing.

The net of the JAD was about a 30% hit to the Taligent base (I think two new frameworks plus hits to the existing frameworks) to support business critical applications.

Taligent was also going thru rapid maturity (outside of the personal computing, GUI paradigm) ... a sample business application required 3500 classes in Taligent and only 700 classes in a more mature object product targeted for the business environment.

I think that shortly after Taligent vacated their building ... the Sun Java group moved in.

... trivia ... the last thing we did at IBM was our HA/CMP product. It started out as HA/6000 for the NYTimes to move their newspaper (ATEX) off VAXCluster to RS/6000. I renamed it HA/CMP (High Availability Cluster Multi-Processing) when we started doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors. I was then asked to write a section for the corporate continuous availability strategy document (however, the section got pulled when both AS400/Rochester and mainframe/POK complained). Early 92, cluster scale-up is transferred for announce as an IBM supercomputer and we were told we couldn't work on anything with more than four processors. We leave IBM a few months later.

Later we are brought in as consultants to a small client/server startup. Two former Oracle people that we had worked with on cluster scale-up were there, responsible for something called "commerce server", and wanted to do payment transfers on the server. The startup had also invented this technology they called "SSL" that they wanted to use; the result is now frequently called "electronic commerce". I had absolute authority for payment transactions between the webservers and the gateway to the financial networks ... which somewhat led to doing the Taligent JAD. Postel also sponsored my talk on "Why Internet Isn't Business Critical Dataprocessing" ... in large part based on the software, procedures & documentation that I had to do for "electronic commerce".

Taligent
https://en.wikipedia.org/wiki/Taligent
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous available strategy posts
https://www.garlic.com/~lynn/submain.html#available
payment transaction gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway

specific posts mentioning Taligent JAD
https://www.garlic.com/~lynn/2017b.html#46 The ICL 2900
https://www.garlic.com/~lynn/2017.html#27 History of Mainframe Cloud
https://www.garlic.com/~lynn/2016f.html#14 New words, language, metaphor
https://www.garlic.com/~lynn/2010g.html#59 Far and near pointers on the 80286 and later
https://www.garlic.com/~lynn/2010g.html#37 16:32 far pointers in OpenWatcom C/C++
https://www.garlic.com/~lynn/2010g.html#15 Far and near pointers on the 80286 and later
https://www.garlic.com/~lynn/2009m.html#26 comp.arch has made itself a sitting duck for spam
https://www.garlic.com/~lynn/2008b.html#22 folklore indeed
https://www.garlic.com/~lynn/2007m.html#36 Future of System/360 architecture?
https://www.garlic.com/~lynn/2006n.html#20 The System/360 Model 20 Wasn't As Bad As All That
https://www.garlic.com/~lynn/2005i.html#42 Development as Configuration
https://www.garlic.com/~lynn/2005f.html#38 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005b.html#40 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004p.html#64 Systems software versus applications software definitions
https://www.garlic.com/~lynn/2001n.html#93 Buffer overflow
https://www.garlic.com/~lynn/2000e.html#46 Where are they now : Taligent and Pink
https://www.garlic.com/~lynn/2000.html#10 Taligent
https://www.garlic.com/~lynn/aadsm27.htm#48 If your CSO lacks an MBA, fire one of you
https://www.garlic.com/~lynn/aadsm24.htm#20 On Leadership - tech teams and the RTFM factor

posts mentioning Postel & "Why Internet Isn't Business Critical Dataprocessing"
https://www.garlic.com/~lynn/2023c.html#53 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#34 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023.html#31 IBM Change
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022f.html#46 What's something from the early days of the Internet which younger generations may not know about?
https://www.garlic.com/~lynn/2022f.html#33 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#28 IBM "nine-net"
https://www.garlic.com/~lynn/2022c.html#14 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#108 Attackers exploit fundamental flaw in the web's security to steal $2 million in cryptocurrency
https://www.garlic.com/~lynn/2022b.html#68 ARPANET pioneer Jack Haverty says the internet was never finished
https://www.garlic.com/~lynn/2022b.html#38 Security
https://www.garlic.com/~lynn/2021k.html#128 The Network Nation
https://www.garlic.com/~lynn/2021k.html#87 IBM and Internet Old Farts
https://www.garlic.com/~lynn/2021k.html#57 System Availability
https://www.garlic.com/~lynn/2021j.html#55 ESnet
https://www.garlic.com/~lynn/2021j.html#42 IBM Business School Cases
https://www.garlic.com/~lynn/2021h.html#24 NOW the web is 30 years old: When Tim Berners-Lee switched on the first World Wide Web server
https://www.garlic.com/~lynn/2021e.html#74 WEB Security
https://www.garlic.com/~lynn/2021e.html#7 IBM100 - Rise of the Internet
https://www.garlic.com/~lynn/2021d.html#16 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#68 Online History
https://www.garlic.com/~lynn/2019d.html#113 Internet and Business Critical Dataprocessing
https://www.garlic.com/~lynn/2019.html#25 Are we all now dinosaurs, out of place and out of time?
https://www.garlic.com/~lynn/2018f.html#60 1970s school compsci curriculum--what would you do?
https://www.garlic.com/~lynn/2017g.html#14 Mainframe Networking problems
https://www.garlic.com/~lynn/2017f.html#100 Jean Sammet, Co-Designer of a Pioneering Computer Language, Dies at 89
https://www.garlic.com/~lynn/2017e.html#14 The Geniuses that Anticipated the Idea of the Internet
https://www.garlic.com/~lynn/2017e.html#11 The Geniuses that Anticipated the Idea of the Internet
https://www.garlic.com/~lynn/2015e.html#10 The real story of how the Internet became so vulnerable

--
virtualization experience starting Jan1968, online at home since Mar1970

Taligent and Pink

From: Lynn Wheeler <lynn@garlic.com>
Subject: Taligent and Pink
Date: 19 July, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#81 Taligent and Pink

trivia: later we did do some business critical applications for the financial industry with a company that had been formed by the guy (and some of his people) who had previously done the IBM FAA ATC system (which was where the 700 classes number came from, compared to taligent's 3500 classes for similar applications).

FAA ATC, The Brawl in IBM 1964
https://www.amazon.com/Brawl-IBM-1964-Joseph-Fox/dp/1456525514

Two mid air collisions 1956 and 1960 make this FAA procurement special. The computer selected will be in the critical loop of making sure that there are no more mid-air collisions. Many in IBM want to not bid. A marketing manager with but 7 years in IBM and less than one year as a manager is the proposal manager. IBM is in midstep in coming up with the new line of computers - the 360. Chaos sucks into the fray many executives- especially the next chairman, and also the IBM president. A fire house in Poughkeepsie N Y is home to the technical and marketing team for 60 very cold and long days. Finance and legal get into the fray after that.

... snip ...

Executive Qualities
https://www.amazon.com/Executive-Qualities-Joseph-M-Fox/dp/1453788794

After 20 years in IBM, 7 as a divisional Vice President, Joe Fox had his standard management presentation -to IBM and CIA groups - published in 1976 -entitled EXECUTIVE QUALITIES. It had 9 printings and was translated into Spanish -and has been offered continuously for sale as a used book on Amazon.com. It is now reprinted -verbatim- and available from Createspace, Inc - for $15 per copy. The book presents a total of 22 traits and qualities and their role in real life situations- and their resolution- encountered during Mr. Fox's 20 years with IBM and with major computer customers, both government and commercial. The presentation and the book followed a focus and use of quotations to Identify and characterize the role of the traits and qualities. Over 400 quotations enliven the text - and synthesize many complex ideas.

... snip ...

some posts mentioning Joe Fox and Template
https://www.garlic.com/~lynn/2021e.html#13 IBM Internal Network
https://www.garlic.com/~lynn/2021.html#42 IBM Rusty Bucket
https://www.garlic.com/~lynn/2019b.html#88 IBM 9020 FAA/ATC Systems from 1960's
https://www.garlic.com/~lynn/2019b.html#73 The Brawl in IBM 1964

--
virtualization experience starting Jan1968, online at home since Mar1970

Typing, Keyboards, Computers

From: Lynn Wheeler <lynn@garlic.com>
Subject: Typing, Keyboards, Computers
Date: 20 July, 2023
Blog: Facebook
In junior high, I found an (antique, working) typewriter in the dump and taught myself touch typing. Before graduating to high school, they were replacing the typing class typewriters, and I managed to score one of their old typewriters (didn't have letters on the keys) to replace the ancient one that I found in the dump.

In college, took 2 credit hr intro to fortran/computers ... and spent some time punching cards on 026. At the end of the semester, was hired to redo 1401 MPIO for 360/30. The univ. was sold a 360/67 for tss/360 replacing 709/1401; the 1401 was replaced temporarily with a 360/30 (which had 1401 emulation, so my job was more of a token exercise for gaining 360 experience). Within a year of taking the intro class, I was hired fulltime responsible for OS/360 (360/67 ran as 360/65, tss/360 never really came to production fruition). The univ. shut down the datacenter on weekends and I would have the whole place dedicated (although 48hrs w/o sleep made Mondays difficult). After being hired fulltime, got a direct connected (hard-wired) 2741 in my office.

Then some people from Cambridge Science Center came out to install (virtual machine) CP67/CMS ... which mostly I would play with on my weekend time. CP67 had 2741&1052 terminal support with automagic terminal type identification. The univ. had some number of TTY/ASCII terminals (mostly 33s, but a couple 35s) ... so I added TTY/ASCII support to CP67 ... integrated with the automagic terminal type-id. I then wanted to have a single dial-in number for all terminal types
https://en.wikipedia.org/wiki/Line_hunting
It didn't quite work, since the IBM terminal controller took a shortcut and hardwired the line-speed. This was somewhat the motivation for the univ. to do a clone controller, building a channel board for an Interdata/3 programmed to emulate the IBM controller ... with the addition of dynamically adapting line speed (in addition to terminal type). It was then upgraded to an Interdata/4 for the channel interface and a cluster of Interdata/3s for the port interfaces. The boxes were sold as a clone controller by Interdata ... and later by Perkin-Elmer
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division
https://en.wikipedia.org/wiki/Concurrent_Computer_Corporation

Four of us get written up as responsible for (some part of) the IBM clone controller business.

Before graduating, I was hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services, consolidating all dataprocessing into an independent business unit to better monetize the investment, including offering services to non-Boeing entities (sort of early cloud). I thought the Renton datacenter was possibly the largest in the world, something like a couple hundred million in IBM gear (one 360/75 and lots of 360/65s, new ones arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room). Lots of politics between the Renton director and the CFO (who only had a 360/30 up at Boeing field for payroll, although they enlarge the machine room and install a 360/67 for me to play with, when I'm not doing other stuff). There was a disaster plan to replicate Renton up at the new 747 plant in Everett; the scenario was Mt. Rainier heats up and the resulting mud slide takes out the Renton datacenter.

When I graduate, I join the science center and in addition to the 2741 in my office, also get a 2741 at home. In 1977, the home 2741 was replaced with a 300 baud CDI miniterm; after a year or two, upgraded to an IBM 3101 "glass teletype" w/1200baud modem, before being replaced with an ibm/pc with a (hardware encrypting capable) 2400 baud modem.

360 plug compatible controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

some recent posts mentioning 709, 1401, mpio, os/360, cp/67, 360/67, boeing cfo, & renton datacenter
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2023d.html#66 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#15 Boeing 747
https://www.garlic.com/~lynn/2023c.html#86 IBM Commission and Quota
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023c.html#68 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#67 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#66 Economic Mess and IBM Science Center
https://www.garlic.com/~lynn/2023c.html#15 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#101 IBM Oxymoron
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#75 IBM Mainframe
https://www.garlic.com/~lynn/2023.html#118 Google Tells Some Employees to Share Desks After Pushing Return-to-Office Plan
https://www.garlic.com/~lynn/2023.html#63 Boeing to deliver last 747, the plane that democratized flying
https://www.garlic.com/~lynn/2023.html#57 Almost IBM class student
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2023.html#5 1403 printer
https://www.garlic.com/~lynn/2022h.html#99 IBM 360
https://www.garlic.com/~lynn/2022h.html#82 Boeing's last 747 to roll out of Washington state factory
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2022h.html#24 Inventing the Internet
https://www.garlic.com/~lynn/2022h.html#4 IBM CAD

--
virtualization experience starting Jan1968, online at home since Mar1970

The Control Data 6600

From: Lynn Wheeler <lynn@garlic.com>
Subject: The Control Data 6600
Date: 20 July, 2023
Blog: Facebook
The Control Data 6600
http://www.bitsavers.org/pdf/cdc/cyber/books/DesignOfAComputer_CDC6600.pdf

In Jan1979, was asked to run a national lab (from 6600) benchmark on an engineering 4341 ... they were looking at getting 70 for a compute farm (sort of leading edge of the coming cluster supercomputing tsunami). Even tho the engineering 4341 had a slowed-down processor clock, it still benchmarked almost identical to the 6600.

In 1980, started working with some of Thornton's people ... which continued off and on through the 80s and later

... topic drift ... end of ACS/360 ... claims that executives were afraid that it would advance the state-of-the-art too fast and IBM would lose control of the market (Amdahl leaves shortly after) ... it also has some of the features that show up with ES/9000 in the 90s
https://people.cs.clemson.edu/~mark/acs_end.html

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

some specific posts mentioning Thornton and NSC
https://www.garlic.com/~lynn/2022f.html#92 CDC6600, Cray, Thornton
https://www.garlic.com/~lynn/2022f.html#91 CDC6600, Cray, Thornton
https://www.garlic.com/~lynn/2022f.html#90 CDC6600, Cray, Thornton
https://www.garlic.com/~lynn/2022f.html#89 CDC6600, Cray, Thornton
https://www.garlic.com/~lynn/2017k.html#63 SABRE after the 7090
https://www.garlic.com/~lynn/2014g.html#75 non-IBM: SONY new tape storage - 185 Terabytes on a tape
https://www.garlic.com/~lynn/2014c.html#80 11 Years to Catch Up with Seymour
https://www.garlic.com/~lynn/2013g.html#6 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2006y.html#18 The History of Computer Role-Playing Games

some other posts mentioning 4341 rain/rain4 benchmarks
https://www.garlic.com/~lynn/2023c.html#68 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2022h.html#84 CDC, Cray, Supercomputers
https://www.garlic.com/~lynn/2022h.html#42 computer history thru the lens of esthetics versus economics
https://www.garlic.com/~lynn/2022d.html#19 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022c.html#101 IBM 4300, VS1, VM370
https://www.garlic.com/~lynn/2022b.html#102 370/158 Integrated Channel
https://www.garlic.com/~lynn/2022b.html#98 CDC6000
https://www.garlic.com/~lynn/2022b.html#16 Channel I/O
https://www.garlic.com/~lynn/2022.html#124 TCP/IP and Mid-range market
https://www.garlic.com/~lynn/2022.html#112 GM C4 and IBM HA/CMP
https://www.garlic.com/~lynn/2022.html#95 Latency and Throughput
https://www.garlic.com/~lynn/2022.html#15 Mainframe I/O
https://www.garlic.com/~lynn/2021i.html#91 bootstrap, was What is the oldest computer that could be used today for real work?
https://www.garlic.com/~lynn/2021d.html#57 IBM 370
https://www.garlic.com/~lynn/2021d.html#35 April 7, 1964: IBM Bets Big on System/360
https://www.garlic.com/~lynn/2021b.html#44 HA/CMP Marketing
https://www.garlic.com/~lynn/2021.html#76 4341 Benchmarks

--
virtualization experience starting Jan1968, online at home since Mar1970

Airline Reservation System

From: Lynn Wheeler <lynn@garlic.com>
Subject: Airline Reservation System
Date: 22 July, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#80 Airline Reservation System

My wife was in the gburg JES group when she is con'ed into going to POK to be in charge of loosely-coupled architecture (mainframe for cluster), where she did peer-coupled shared data architecture. She didn't remain long because of 1) repeated battles with the communication group trying to force her into using SNA/VTAM for loosely-coupled operation and 2) little uptake, except for IMS hot-standby (until much later with SYSPLEX and Parallel SYSPLEX). She has a story about asking Vern Watts whose permission he was going to ask to do IMS hot-standby. He replies nobody, he would just tell them when it was all done. Vern had another problem with hot-standby: while IMS itself could fall-over in a couple minutes ... for a large terminal configuration, VTAM could take well over an hour (to get all the sessions back up and operating).

peer-coupled shared data architecture posts
https://www.garlic.com/~lynn/submain.html#shareddata

The last product we did at IBM was HA/CMP ... it started out as HA/6000 for the NYTimes to move their newspaper system (ATEX) off VAXcluster to RS/6000. I then renamed it HA/CMP (High Availability Cluster Multi-Processing) when I started doing technical/scientific cluster scale-up with the national labs and commercial cluster scale-up with the RDBMS vendors (Oracle, Informix, Ingres, Sybase). I then get asked to write a section for the corporate continuous availability strategy document ... it gets pulled after AS/400 (Rochester) and mainframe (POK) complain that they can't meet the requirements. Late Jan1992, cluster scale-up is transferred for announce as IBM Supercomputer (technical/scientific *ONLY*) and we are told we aren't allowed to work on anything with more than four processors. We leave IBM a few months later.

ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

About the same time we were being brought into the largest res system ... we were also brought into a small client/server startup as consultants. Two of the former Oracle people we had worked with on HA/CMP scale-up were there, responsible for something called "commerce server", and wanted to do payment transactions on the server. The startup had also invented this technology they called "SSL" that they wanted to use; the result is now frequently called "electronic commerce". I had responsibility for everything between the webservers and the financial payment networks .... but could only make recommendations on the browser side. An example: the payment network trouble desk required 5min first-level problem determination. One of their first large customers had a problem and it was closed after three hours with NTF (no trouble found). I had to do a lot of software and procedures to get stuff up to payment network standards. Postel
https://en.wikipedia.org/wiki/Jon_Postel
then sponsors my talk "Why The Internet Isn't Business Critical Dataprocessing" ... based on the work I had to do for (payment networks side of) electronic commerce. I would needle them that to turn a well designed and implemented application into a service, it could take me 4-10 times the original application effort.

payment network gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
availability posts
https://www.garlic.com/~lynn/submain.html#availability
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

posts mentioning service requires 4-10 times effort
https://www.garlic.com/~lynn/2021k.html#57 System Availability
https://www.garlic.com/~lynn/2021.html#13 Resilience and Sustainability
https://www.garlic.com/~lynn/2017j.html#42 Tech: we didn't mean for it to turn out like this
https://www.garlic.com/~lynn/2017i.html#18 progress in e-mail, such as AOL
https://www.garlic.com/~lynn/2017f.html#23 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/2017.html#27 History of Mainframe Cloud
https://www.garlic.com/~lynn/2015e.html#16 The real story of how the Internet became so vulnerable
https://www.garlic.com/~lynn/2015e.html#10 The real story of how the Internet became so vulnerable
https://www.garlic.com/~lynn/2014m.html#146 LEO
https://www.garlic.com/~lynn/2014m.html#86 Economic Failures of HTTPS Encryption
https://www.garlic.com/~lynn/2014f.html#25 Before the Internet: The golden age of online services
https://www.garlic.com/~lynn/2014f.html#13 Before the Internet: The golden age of online services
https://www.garlic.com/~lynn/2012d.html#44 Faster, Better, Cheaper: Why Not Pick All Three?
https://www.garlic.com/~lynn/2011k.html#67 Somewhat off-topic: comp-arch.net cloned, possibly hacked
https://www.garlic.com/~lynn/2011i.html#27 PDCA vs. OODA
https://www.garlic.com/~lynn/2009.html#0 Is SUN going to become x86'ed ??
https://www.garlic.com/~lynn/2008n.html#35 Builders V. Breakers
https://www.garlic.com/~lynn/2008e.html#50 fraying infrastructure
https://www.garlic.com/~lynn/2008e.html#41 IBM announced z10 ..why so fast...any problem on z 9
https://www.garlic.com/~lynn/2007p.html#54 Industry Standard Time To Analyze A Line Of Code
https://www.garlic.com/~lynn/2007o.html#23 Outsourcing loosing steam?
https://www.garlic.com/~lynn/2007n.html#77 PSI MIPS
https://www.garlic.com/~lynn/2007h.html#78 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007f.html#37 Is computer history taught now?
https://www.garlic.com/~lynn/2006n.html#20 The System/360 Model 20 Wasn't As Bad As All That
https://www.garlic.com/~lynn/2003g.html#62 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/aadsm27.htm#48 If your CSO lacks an MBA, fire one of you
https://www.garlic.com/~lynn/aadsm25.htm#37 How the Classical Scholars dropped security from the canon of Computer Science

posts mentioning "Why the Internet Isn't Business Critical Dataprocessing" https://www.garlic.com/~lynn/2023d.html#81 Taligent and Pink
https://www.garlic.com/~lynn/2023d.html#56 How the Net Was Won
https://www.garlic.com/~lynn/2023c.html#53 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023.html#82 Memories of Mosaic
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2023.html#31 IBM Change
https://www.garlic.com/~lynn/2022g.html#94 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#26 Why Things Fail
https://www.garlic.com/~lynn/2022f.html#46 What's something from the early days of the Internet which younger generations may not know about?
https://www.garlic.com/~lynn/2022f.html#33 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#105 FedEx to Stop Using Mainframes, Close All Data Centers By 2024
https://www.garlic.com/~lynn/2022e.html#28 IBM "nine-net"
https://www.garlic.com/~lynn/2022c.html#14 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#108 Attackers exploit fundamental flaw in the web's security to steal $2 million in cryptocurrency
https://www.garlic.com/~lynn/2022b.html#68 ARPANET pioneer Jack Haverty says the internet was never finished
https://www.garlic.com/~lynn/2022b.html#38 Security
https://www.garlic.com/~lynn/2022.html#129 Dataprocessing Career
https://www.garlic.com/~lynn/2021k.html#128 The Network Nation
https://www.garlic.com/~lynn/2021k.html#87 IBM and Internet Old Farts
https://www.garlic.com/~lynn/2021k.html#57 System Availability
https://www.garlic.com/~lynn/2021j.html#55 ESnet
https://www.garlic.com/~lynn/2021e.html#74 WEB Security
https://www.garlic.com/~lynn/2021e.html#56 Hacking, Exploits and Vulnerabilities
https://www.garlic.com/~lynn/2021d.html#16 The Rise of the Internet
https://www.garlic.com/~lynn/2018f.html#60 1970s school compsci curriculum--what would you do?
https://www.garlic.com/~lynn/2017j.html#42 Tech: we didn't mean for it to turn out like this
https://www.garlic.com/~lynn/2017g.html#14 Mainframe Networking problems
https://www.garlic.com/~lynn/2017f.html#100 Jean Sammet, Co-Designer of a Pioneering Computer Language, Dies at 89
https://www.garlic.com/~lynn/2017f.html#23 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/2017e.html#47 A flaw in the design; The Internet's founders saw its promise but didn't foresee users attacking one another
https://www.garlic.com/~lynn/2017e.html#11 The Geniuses that Anticipated the Idea of the Internet
https://www.garlic.com/~lynn/2017d.html#92 Old hardware

--
virtualization experience starting Jan1968, online at home since Mar1970

5th flr Multics & 4th flr science center

From: Lynn Wheeler <lynn@garlic.com>
Subject: 5th flr Multics & 4th flr science center
Date: 23 July, 2023
Blog: Facebook
Some of the MIT CTSS/7094 people went to the 5th flr to do MULTICS, others went to the IBM science center on the 4th flr and did virtual machines (initially cp40/cms on a 360/40 with hardware mods for virtual memory, morphs into CP67/CMS when the 360/67 standard with virtual memory becomes available, and later, after the decision to make all 370s with virtual memory, morphs into VM370/CMS), the internal network, lots of online and performance work. CTSS RUNOFF was redone for CMS as SCRIPT and after GML was invented at the science center in 1969, GML tag processing was added to SCRIPT (after a decade, GML morphs into ISO standard SGML and after another decade morphs into HTML at CERN).

Some friendly rivalry between the 4th & 5th flrs. One of the MULTICS installations was USAFDC.
https://www.multicians.org/sites.html
https://www.multicians.org/mga.html#AFDSC
https://www.multicians.org/site-afdsc.html

In spring 1979, some from USAFDC wanted to come by to talk about getting 20 4341 VM370 systems. When they finally came by six months later, the planned order had grown to 210 4341 VM370 systems. Earlier in jan1979, I had been con'ed into doing a 6600 benchmark on an internal engineering 4341 (before product shipping to customers) for a national lab that was looking at getting 70 4341s for a compute farm (sort of leading edge of the coming cluster supercomputing tsunami).

(4th flr) science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

posts mentioning Multics and USAFDC
https://www.garlic.com/~lynn/2022g.html#8 IBM 4341
https://www.garlic.com/~lynn/2022.html#17 Mainframe I/O
https://www.garlic.com/~lynn/2021c.html#48 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2021b.html#55 In the 1970s, Email Was Special
https://www.garlic.com/~lynn/2020.html#38 Early mainframe security
https://www.garlic.com/~lynn/2018e.html#92 It's 1983: What computer would you buy?
https://www.garlic.com/~lynn/2017e.html#58 A flaw in the design; The Internet's founders saw its promise but didn't foresee users attacking one another
https://www.garlic.com/~lynn/2017c.html#53 Multics Timeline
https://www.garlic.com/~lynn/2015g.html#54 Mainframes open to internet attacks?
https://www.garlic.com/~lynn/2015f.html#85 Miniskirts and mainframes
https://www.garlic.com/~lynn/2013m.html#38 Quote on Slashdot.org
https://www.garlic.com/~lynn/2012i.html#44 Simulated PDP-11 Blinkenlight front panel for SimH
https://www.garlic.com/~lynn/2012g.html#5 What are the implication of the ongoing cyber attacks on critical infrastructure
https://www.garlic.com/~lynn/2012e.html#45 Word Length
https://www.garlic.com/~lynn/2010b.html#97 "The Naked Mainframe" (Forbes Security Article)
https://www.garlic.com/~lynn/2007b.html#51 Special characters in passwords was Re: RACF - Password rules
https://www.garlic.com/~lynn/2006k.html#32 PDP-1
https://www.garlic.com/~lynn/2002l.html#44 Thirty Years Later: Lessons from the Multics Security Evaluation

--
virtualization experience starting Jan1968, online at home since Mar1970

545tech sq, 3rd, 4th, & 5th flrs

From: Lynn Wheeler <lynn@garlic.com>
Subject: 545tech sq, 3rd, 4th, & 5th flrs
Date: 23 July, 2023
Blog: Facebook
some of the MIT CTSS/7094 people went to the 5th flr, project mac, and MULTICS. Others went to the IBM Science Center on the 4th flr and did virtual machines, originally CP40/CMS on a 360/40 with hardware mods for virtual memory (they originally wanted a 360/50, but all spare 50s were going to FAA/ATC); it morphs into CP67/CMS when the 360/67 standard with virtual memory became available (and later morphs into VM370/CMS when the decision was made to make all 370s "virtual memory") ... the internal network (also used for the corporate sponsored univ. BITNET), lots of interactive & performance technology ... also CTSS RUNOFF redone for CMS as SCRIPT and after GML was invented at the science center in 1969, GML tag processing added to SCRIPT (after a decade, GML morphs into ISO standard SGML and after another decade morphs into HTML at CERN).

After I graduated and joined the science center, one of my hobbies was enhanced production operating systems for internal datacenters. Note in the morph from CP67 to VM370 ... lots of stuff was simplified and/or dropped ... and I spent some part of 1974 adding stuff back into VM370. I eventually got around to adding multiprocessor support back into VM370 Release 3 ... and for some reason, AT&T Longlines made some deal with IBM to get a copy of the CSC/VM prior to the multiprocessor support.

IBM Boston Programming Center was on the 3rd flr and did CPS ... contracting with Allen-Babcock for some amount of the work (including the CPS microcode assist for the 360/50).
http://www.bitsavers.com/pdf/allen-babcock/cps/
http://www.bitsavers.com/pdf/allen-babcock/cps/CPS_Progress_Report_may66.pdf

... note when the VM370 group split off from the science center ... they moved to the 3rd flr taking over BPC (and their people) ... and a version of CPS was done for VM370/CMS ... When they outgrew that space, they moved out to the vacant SBC building at Burlington Mall (on rt128).

Note in the early/mid 70s, IBM had the Future System project (totally different from 370 and was going to completely replace 370). During FS, internal politics was shutting down 370 efforts (the lack of new 370 during FS is credited with giving the clone mainframe makers their market foothold). When FS finally implodes, there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel. Some more details:
http://www.jfsowa.com/computer/memo125.htm

Originally, the 3081 was never going to have a single processor version (it would always be multiprocessor). A problem was that the (airline) ACP/TPF operating system didn't have multiprocessor support (only ran on single processor machines). There was concern that the whole ACP/TPF market would move to the latest Amdahl single processor machine (which had about the same processing power as the two processor 3081). I also got a call from the IBM national marketing rep for AT&T. It turns out that over the years Longlines had made source code enhancements to my (early) CSC/VM (before multiprocessor support) and propagated it to several AT&T datacenters; and IBM was concerned that AT&T would also migrate to the latest Amdahl machine.

some Amdahl trivia ... end of ACS/360 ... some claims that executives shut it down because they were afraid it would advance the state of the art too fast, and IBM would lose control of the market. Amdahl leaves shortly afterwards and starts his own company ... more detail (including features that show up more than 20yrs later in the 90s with ES/9000):
https://people.cs.clemson.edu/~mark/acs_end.html

other trivia: account of Learson failing to block the bureaucrats, careerists and MBAs from destroying the Watson legacy ... and 20yrs afterwards, in 1992, IBM has one of the largest losses in US corporate history and was being reorganized into the 13 "baby blues" in preparation for breaking up the company.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm
SMP, multiprocessor, and/or comapare&swap posts
https://www.garlic.com/~lynn/subtopic.html#smp
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
runoff, script, gml, sgml, html posts
https://www.garlic.com/~lynn/submain.html#sgml
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

some posts mentioning "The Big One" (3033, started out 168-3 logic remapped to 20% faster chips)
https://www.garlic.com/~lynn/2023d.html#36 "The Big One" (IBM 3033)
https://www.garlic.com/~lynn/2023.html#92 IBM 4341
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2018f.html#0 IBM's 3033
https://www.garlic.com/~lynn/2017c.html#30 The ICL 2900
https://www.garlic.com/~lynn/2016e.html#116 How the internet was invented
https://www.garlic.com/~lynn/2016b.html#23 IBM's 3033; "The Big One": IBM's 3033
https://www.garlic.com/~lynn/2014m.html#105 IBM 360/85 vs. 370/165
https://www.garlic.com/~lynn/2014h.html#6 Demonstrating Moore's law
https://www.garlic.com/~lynn/2014g.html#103 Fifty Years of nitpicking definitions, was BASIC,theProgrammingLanguageT
https://www.garlic.com/~lynn/2012n.html#36 390 vector instruction set reuse, was 8-bit bytes
https://www.garlic.com/~lynn/2009g.html#70 Mainframe articles
https://www.garlic.com/~lynn/2003g.html#22 303x, idals, dat, disk head settle, and other rambling folklore

some posts mentioning Boston Programming Center & CPS:
https://www.garlic.com/~lynn/2022e.html#31 Technology Flashback
https://www.garlic.com/~lynn/2022d.html#58 IBM 360/50 Simulation From Its Microcode
https://www.garlic.com/~lynn/2016d.html#35 PL/I advertising
https://www.garlic.com/~lynn/2016d.html#34 The Network Nation, Revised Edition
https://www.garlic.com/~lynn/2014f.html#5 Before the Internet: The golden age of online services
https://www.garlic.com/~lynn/2014f.html#4 Another Golden Anniversary - Dartmouth BASIC
https://www.garlic.com/~lynn/2014e.html#74 Another Golden Anniversary - Dartmouth BASIC
https://www.garlic.com/~lynn/2013l.html#28 World's worst programming environment?
https://www.garlic.com/~lynn/2013l.html#24 Teletypewriter Model 33
https://www.garlic.com/~lynn/2013c.html#36 Lisp machines, was What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013c.html#8 OT: CPL on LCM systems [was Re: COBOL will outlive us all]
https://www.garlic.com/~lynn/2012o.html#72 AMC proposes 1980s computer TV series "Halt & Catch Fire"
https://www.garlic.com/~lynn/2012n.html#26 Is there a correspondence between 64-bit IBM mainframes and PoOps editions levels?
https://www.garlic.com/~lynn/2012e.html#100 Indirect Bit
https://www.garlic.com/~lynn/2010p.html#42 Which non-IBM software products (from ISVs) have been most significant to the mainframe's success?
https://www.garlic.com/~lynn/2010e.html#14 Senior Java Developer vs. MVS Systems Programmer (warning: Conley rant)
https://www.garlic.com/~lynn/2008s.html#71 Is SUN going to become x86'ed ??

--
virtualization experience starting Jan1968, online at home since Mar1970

545tech sq, 3rd, 4th, & 5th flrs

From: Lynn Wheeler <lynn@garlic.com>
Subject: 545tech sq, 3rd, 4th, & 5th flrs
Date: 24 July, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#87 545tech sq, 3rd, 4th, & 5th flrs

addenda ... and the science center datacenter was on the 2nd flr

I took a 2 credit hr intro to fortran/computers. At the end of the semester I was hired to reimplement 1401 MPIO in assembler for the 360/30. The univ had a 709 tape->tape with a 1401 for unit record front-end. The univ was sold a 360/67 for TSS/360, replacing the 709/1401. Pending the 360/67, the 1401 was replaced with a 360/30 (which had 1401 emulation so could continue to run MPIO ... but I assume my job was part of getting 360 experience). I was given a bunch of hardware&software manuals and got to design & implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc. The univ shut down the datacenter on weekends and I had the place dedicated (although 48 hrs w/o sleep made Monday classes hard) and within a few weeks I had a 2000-card assembler program. trivia: learned to disassemble the 2540 and clean the reader/punch, clean the 1403, and clean the tape drives 1st thing Sat. Also, sometimes production had finished early and all the machines were powered off when I came in Sat. morning. I would hit the 360/30 power-on and periodically it wouldn't complete the power-up sequence. After some amount of trial&error, I learned to put all controllers in CE-mode, power on the 360/30, and then individually power-on the controllers before taking them out of CE-mode.

Within a year of taking the intro class, the 360/67 had arrived and I was hired fulltime responsible for OS/360 (tss/360 never quite came to production fruition, so it ran as a 360/65 with os/360). Student fortran jobs had run under a second on the 709; initially they ran over a minute with OS/360. I install HASP and it cuts the time in half. I then start reorganizing STAGE2 SYSGEN to carefully place datasets and PDS members to optimize arm seek and (PDS directory) multi-track search, cutting the time by another 2/3rds to 12.9secs. Sometimes six months of PTF activity (replacing system PDS members) would nearly double the student fortran time and I would have to do a special system build to regain the optimization. Student fortran never gets better than the 709 until I install (univ. of waterloo) WATFOR.
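
a minimal back-of-the-envelope sketch (python) of that time trajectory; only the 12.9sec figure is a measurement from above, the ~39sec and ~77sec intermediate values are inferred for illustration:

# student fortran elapsed-time trajectory described above
final_time = 12.9                # secs, after the STAGE2 SYSGEN placement work
after_hasp = final_time * 3      # cutting "another 2/3rds" leaves one third
before_hasp = after_hasp * 2     # HASP had "cut the time in half"
print(f"os/360 ~{before_hasp:.1f}s -> HASP ~{after_hasp:.1f}s -> sysgen {final_time}s")
# for comparison, student fortran had run in under a second on the 709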

Then some people come out from the science center to install CP67/CMS (3rd installation after CSC itself and MIT Lincoln Labs) ... I get to mostly play with it during my dedicated weekend time ... and start rewriting lots of CP67 code. Initially a test of OS/360 running a batch job stream takes 322secs on the bare machine and 856secs running in a CP67 virtual machine, an addition of 534secs of CP67 CPU overhead. After six months playing with CP67, I've cut that to 113secs (a reduction of 421secs).
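
a minimal sketch (python) of that overhead arithmetic; the numbers are from the paragraph above, the percentage is derived:

# CP67 CPU overhead, before and after six months of rewriting CP67 code
bare = 322                       # secs, OS/360 batch job stream on the bare machine
virtual = 856                    # secs, same job stream in a CP67 virtual machine
overhead = virtual - bare        # 534 secs of CP67 CPU overhead
overhead_after = overhead - 421  # 113 secs after the rewrites
print(overhead, overhead_after, f"{421/overhead:.0%} reduction")  # 534 113 79% reduction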

Before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the creation of Boeing Computer Services (move all dataprocessing to an independent business unit to better monetize the investment, also offering services to non-Boeing entities ... sort of early cloud). I think the Renton datacenter is possibly the largest in the world, a couple hundred million in IBM 360 systems. A 360/75 and boat loads of 360/65s, with 360/65 boxes arriving faster than they can be installed, boxes constantly staged in the hallways around the machine room. The 360/75 sometimes ran classified jobs and there would be black rope around the 360/75 perimeter area with guards, thick black velvet over the console lights and the 1403 areas that exposed printed paper. Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing field for payroll (although they enlarge the machine room to install a 360/67 for me to play with when I'm not doing other stuff). There was also a disaster plan to replicate Renton up at the new 747 plant in Everett ... Mt. Rainier heats up and the resulting mud slide takes out the Renton datacenter. When I graduate, I leave Boeing and join the IBM science center.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

some posts mentioning 709/1401, MPIO, WATFOR, Boeing CFO, & 360/75
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#75 IBM Mainframe
https://www.garlic.com/~lynn/2023b.html#15 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023.html#22 IBM Punch Cards
https://www.garlic.com/~lynn/2022h.html#99 IBM 360
https://www.garlic.com/~lynn/2022h.html#60 Fortran
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#31 Technology Flashback
https://www.garlic.com/~lynn/2022d.html#110 Window Display Ground Floor Datacenters
https://www.garlic.com/~lynn/2022d.html#95 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#57 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022d.html#10 Computer Server Market
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021j.html#64 addressing and protection, was Paper about ISO C
https://www.garlic.com/~lynn/2021j.html#63 IBM 360s
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles
https://www.garlic.com/~lynn/2018f.html#51 All programmers that developed in machine code and Assembly in the 1940s, 1950s and 1960s died?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Remains America's Worst Big Tech Company

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Remains America's Worst Big Tech Company
Date: 24 July, 2023
Blog: Facebook
IBM Remains America's Worst Big Tech Company
https://247wallst.com/technology-3/2023/07/20/ibm-remains-americas-worst-big-tech-company-2/

Some trivia: account of Learson failing to block the bureaucrats, careerists and MBAs from destroying the Watson legacy ... and 20yrs later, in 1992, IBM has one of the largest losses in US corporate history and was being reorganized into the 13 "baby blues" in preparation for breaking up the company.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler

ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3083

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3083
Date: 24 July, 2023
Blog: Facebook
(long winded & topic drift) 3083 trivia;

early/mid 70s, IBM had the future system project, completely different from 370 and was going to completely replace 370 ... and during FS, internal politics shut down 370 projects (claim that the lack of new 370 during the period gave the clone 370 makers their market foothold). When FS finally implodes there is a mad rush to get stuff back into the 370 product pipelines ... including kicking off the quick&dirty 3033&3081 efforts in parallel ... some more info:
http://www.jfsowa.com/computer/memo125.htm

originally 308x was only going to be multiprocessor machines ... however the (at the time, airline) ACP/TPF operating system didn't have multiprocessor support and IBM was afraid the whole ACP/TPF market would move to Amdahl (Amdahl kept making single processor machines, and its latest single processor machine had about the same processing power as the two processor 3081K; the original 3081D had two processors, each about the same as a 3033 for lots of benchmarks, and the cache size was doubled for the 3081K to increase processing). Eventually IBM comes out with the 3083 ... effectively a 3081 with one of the processors removed (trivia: the 3081 2nd processor was in the middle of the box, so just removing it would have left the box top heavy; they had to rewire the box to move the top processor to the middle for the 3083). The memo125 article references that the 3081 had 16 times the circuitry of comparable machines (which possibly motivated TCMs in order to reduce the physical size).

I got dragged into some of it. After joining IBM, one of my hobbies was enhanced production operating systems for internal datacenters. Then after the decision to make all 370s "virtual memory" machines, there was a decision to do VM370 (some of the people moved from the science center on the 4th flr to the 3rd flr, taking over the IBM Boston Programming Center). In the morph from CP67->VM370 lots of features were simplified and/or dropped. I then spent some amount of 1974 adding lots of stuff back into VM370 ... coming out with production CSC/VM early 1975 (which didn't yet have multiprocessor support). some old email
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

Somehow, AT&T longlines cut a deal with IBM to get a copy of my early CSC/VM system. Roll forward to early 80s, I'm tracked down by the IBM national marketing rep for AT&T. Turns out over the years, Longlines had made numerous source changes and propagated the system around AT&T. So IBM was concerned that AT&T market would also move to Amdahl (based on my pre-multiprocessor CSC/VM system, similar to concern about the ACP/TPF didn't have multiprocessor support). A later CSC/VM, I added multiprocessor support ... initially for the IBM world-wide, online sale&marketing support HONE system (which was a long-time customer).

HONE trivia: In the mid-70s, the US HONE datacenters were consolidated in silicon valley ... upgrading to (I believe the largest) "single-system-image", loosely-coupled complex with load-balancing, fall-over, and a large DASD farm. I then add multiprocessor support so they can add a 2nd processor to each system (HONE1-HONE8 168s, and HONEDEV/HONE9 158). other trivia: when FACEBOOK 1st moves into silicon valley, it is into a new bldg built next door to the old consolidated US HONE datacenter.

Also in 1975, I had been sucked into working on a multiprocessor, five processor 370/125-II machine ... Endicott was concerned that it would overlap the performance of the new single processor 148 (and the 5 processor 125 got canceled) ... old email ref:
https://www.garlic.com/~lynn/2006w.html#email750827
I had also been sucked into working on the new 138/148 ... so I was asked to argue both sides in the escalation, archived post with original analysis for the 138/148 microcode assist
https://www.garlic.com/~lynn/94.html#21

370 virtual memory trivia: a decade+ ago I got asked if I could track down the decision to make all 370s "virtual memory" ... I found somebody that was staff to the executive. Basically MVT storage management was so bad that regions had to have their size specified four times larger than actually used. As a result a typical 1mbyte 370/165 would run only four regions concurrently ... insufficient to keep the machine busy and justified. Mapping MVT to a 16mbyte virtual memory (similar to running MVT in a CP67 16mbyte virtual machine) would allow increasing the number of concurrently running regions by a factor of four with little or no paging (arithmetic sketched below). pieces of the email exchange in this archived post
https://www.garlic.com/~lynn/2011d.html#73
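
a minimal illustrative sketch (python) of that region arithmetic; the 256kbyte specified / 64kbyte used region split is an assumption chosen to match "specified four times larger than actually used", not a figure from the original:

# MVT region arithmetic on a 1mbyte 370/165 vs a 16mbyte virtual address space
real_storage = 1 * 1024 * 1024           # 1mbyte of real storage on a 370/165
region_specified = 256 * 1024            # region size as specified (assumption)
region_used = region_specified // 4      # size actually used
regions_real = real_storage // region_specified   # only 4 concurrent regions
regions_virtual = 4 * regions_real                # ~16 regions with little or no paging
print(regions_real, regions_virtual)              # 4 16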

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm
smp, multiprocessor, and/or compare-and-swap posts
https://www.garlic.com/~lynn/subtopic.html#smp
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
bounce lock/VAMPS/125-II posts
https://www.garlic.com/~lynn/submain.html#bounce

some recent posts mentioning 3083
https://www.garlic.com/~lynn/2023d.html#23 VM370, SMP, HONE
https://www.garlic.com/~lynn/2023c.html#61 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2022h.html#117 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022e.html#97 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#9 VM/370 Going Away
https://www.garlic.com/~lynn/2022d.html#66 VM/370 Turns 50 2Aug2022
https://www.garlic.com/~lynn/2022b.html#94 Computer BUNCH
https://www.garlic.com/~lynn/2022.html#94 VM/370 Interactive Response
https://www.garlic.com/~lynn/2021k.html#119 70s & 80s mainframes
https://www.garlic.com/~lynn/2021.html#72 Airline Reservation System
https://www.garlic.com/~lynn/2021.html#55 IBM Quota
https://www.garlic.com/~lynn/2021.html#52 Amdahl Computers
https://www.garlic.com/~lynn/2021.html#39 IBM Tech

some recent posts mentioning AT&T Long Lines
https://www.garlic.com/~lynn/2023d.html#87 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023.html#12 IBM Marketing, Sales, Branch Offices
https://www.garlic.com/~lynn/2022e.html#97 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022.html#101 Online Computer Conferencing
https://www.garlic.com/~lynn/2021k.html#63 1973 Holmdel IBM 370's
https://www.garlic.com/~lynn/2021j.html#30 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021j.html#25 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021b.html#80 AT&T Long-lines

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3083

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3083
Date: 24 July, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#90 IBM 3083

Account of Learson failing to block the bureaucrats, careerists, and MBAs from destroying the Watson legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
... mentions several things over the next 20yrs (including a story about the 1st true blue commercial customer ordering Amdahl), until 20yrs later, when IBM has one of the largest losses in the history of US companies and was being reorganized into the 13 "baby blues" in preparation for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
may also work
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

from Ferguson & Morris, "Computer Wars: The Post-IBM World", Time Books, 1993 ....
http://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394

and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrong headedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive.

... snip ...

... and one of the last nails in the FS coffin was an analysis by the Houston Science Center that if a 370/195 application was redone for an FS machine made out of the fastest available technology, it would have the throughput of a 370/145 (about a 30 times slowdown). In the FS period, my wife reported to the head of one of the FS "sections" ... she has commented that in FS meetings with the other "sections" ... there was little content on how things would actually be implemented.

ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

other recent posts mentioning Amdahl
https://www.garlic.com/~lynn/2023d.html#87 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#84 The Control Data 6600
https://www.garlic.com/~lynn/2023d.html#63 CICS Product 54yrs old today
https://www.garlic.com/~lynn/2023d.html#37 Online Forums and Information
https://www.garlic.com/~lynn/2023d.html#23 VM370, SMP, HONE
https://www.garlic.com/~lynn/2023d.html#16 Grace Hopper (& Ann Hardy)
https://www.garlic.com/~lynn/2023d.html#12 Ingenious librarians
https://www.garlic.com/~lynn/2023d.html#10 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#0 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023c.html#97 Fortran
https://www.garlic.com/~lynn/2023c.html#77 IBM Big Blue, True Blue, Bleed Blue
https://www.garlic.com/~lynn/2023c.html#61 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#56 IBM Empty Suits
https://www.garlic.com/~lynn/2023c.html#8 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#98 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#84 Clone/OEM IBM systems
https://www.garlic.com/~lynn/2023b.html#35 When Computer Coding Was a 'Woman's' Job
https://www.garlic.com/~lynn/2023b.html#20 IBM Technology
https://www.garlic.com/~lynn/2023b.html#6 z/VM 50th - part 7
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#73 IBM 4341
https://www.garlic.com/~lynn/2023.html#72 IBM 4341
https://www.garlic.com/~lynn/2023.html#68 IBM and OSS
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2023.html#51 IBM Bureaucrats, Careerists, MBAs (and Empty Suits)
https://www.garlic.com/~lynn/2023.html#46 MTS & IBM 360/67
https://www.garlic.com/~lynn/2023.html#45 IBM 3081 TCM
https://www.garlic.com/~lynn/2023.html#41 IBM 3081 TCM
https://www.garlic.com/~lynn/2023.html#36 IBM changes between 1968 and 1989
https://www.garlic.com/~lynn/2023.html#12 IBM Marketing, Sales, Branch Offices

--
virtualization experience starting Jan1968, online at home since Mar1970

The Admissions Game

From: Lynn Wheeler <lynn@garlic.com>
Subject: The Admissions Game
Date: 25 July, 2023
Blog: Facebook
The Admissions Game
https://www.nakedcapitalism.com/2023/07/the-admissions-game.html

Back in 1813, Thomas Jefferson and John Adams exchanged a series of letters on what would later come to be called meritocracy. Jefferson argued for a robust system of public education so that a "natural aristocracy" based on virtue and talents could be empowered, rather than an "artificial aristocracy founded on wealth and birth." Adams was skeptical that one could so easily displace an entrenched elite:

Aristocracy, like Waterfowl, dives for Ages and then rises with brighter Plumage. It is a subtle Venom that diffuses itself unseen, over Oceans and Continents, and tryumphs over time... it is a Phoenix that rises again out of its own Ashes.

The findings in the paper appear to vindicate Adams.


... snip ...

The Causal Effects of Admission to Highly Selective Private Colleges
https://marginalrevolution.com/marginalrevolution/2023/07/the-causal-effects-of-admission-to-highly-selective-private-colleges.html
Affirmative action for rich kids: It's more than just legacy admissions
https://www.npr.org/sections/money/2023/07/24/1189443223/affirmative-action-for-rich-kids-its-more-than-just-legacy-admissions

inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality

misc. posts mentioning Thomas Jefferson:
https://www.garlic.com/~lynn/2022g.html#15 Pro-Monarch
https://www.garlic.com/~lynn/2022.html#106 The Cult of Trump is actually comprised of MANY other Christian cults
https://www.garlic.com/~lynn/2021j.html#72 In U.S., Far More Support Than Oppose Separation of Church and State
https://www.garlic.com/~lynn/2021i.html#59 The Uproar Ovear the "Ultimate American Bible"
https://www.garlic.com/~lynn/2021f.html#98 No, the Vikings Did Not Discover America
https://www.garlic.com/~lynn/2021f.html#46 Under God
https://www.garlic.com/~lynn/2020.html#4 Bots Are Destroying Political Discourse As We Know It
https://www.garlic.com/~lynn/2019e.html#161 Fascists
https://www.garlic.com/~lynn/2019e.html#158 Goliath
https://www.garlic.com/~lynn/2019e.html#150 How Trump Lost an Evangelical Stalwart
https://www.garlic.com/~lynn/2019e.html#127 The Barr Presidency
https://www.garlic.com/~lynn/2019b.html#43 Actually, the Electoral College Was a Pro-Slavery Ploy
https://www.garlic.com/~lynn/2019.html#44 People are Happier in Social Democracies Because There's Less Capitalism
https://www.garlic.com/~lynn/2019.html#4 Noncompliant: A Lone Whistleblower Exposes the Giants of Wall Street
https://www.garlic.com/~lynn/2018f.html#78 A Short History Of Corporations
https://www.garlic.com/~lynn/2018f.html#9 A Tea Party Movement to Overhaul the Constitution Is Quietly Gaining
https://www.garlic.com/~lynn/2018f.html#8 The LLC Loophole; In New York, where an LLC is legally a person, companies can use the vehicles to blast through campaign finance limits
https://www.garlic.com/~lynn/2018e.html#107 The LLC Loophole; In New York, where an LLC is legally a person
https://www.garlic.com/~lynn/2017k.html#31 The U.S. was not founded as a Christian nation
https://www.garlic.com/~lynn/2017k.html#5 The 1970s engineering recession
https://www.garlic.com/~lynn/2017i.html#40 Equality: The Impossible Quest
https://www.garlic.com/~lynn/2017.html#4 Separation church and state
https://www.garlic.com/~lynn/2016h.html#83 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016h.html#54 CFTC Reproposes Position Limits Rule
https://www.garlic.com/~lynn/2016f.html#0 IBM is Absolutely Down For The Count
https://www.garlic.com/~lynn/2016c.html#37 Qbasic
https://www.garlic.com/~lynn/2013f.html#20 What Makes weapons control Bizarre?

--
virtualization experience starting Jan1968, online at home since Mar1970

The IBM mainframe: How it runs and why it survives

From: Lynn Wheeler <lynn@garlic.com>
Subject: The IBM mainframe: How it runs and why it survives
Date: 25 July, 2023
Blog: Facebook
The IBM mainframe: How it runs and why it survives
https://arstechnica.com/information-technology/2023/07/the-ibm-mainframe-how-it-runs-and-why-it-survives/amp/

Jan1979, I get con'ed into doing a benchmark (from cdc6600) on an (engineering, before first customer ship) 4341, for a national lab that was looking at getting 70 for a compute farm (sort of leading edge of the coming cluster supercomputing tsunami). The engineering 4341 had a slowed-down processor clock but still benchmarked almost the same as the cdc6600. The price/performance was significantly better than any high-end POK mainframe, and POK was so threatened by Endicott that they tried to get corporate to cut the allocation of a critical 4341 manufacturing component in half. Furthermore, a cluster of 4341s was much cheaper than a 3033, had higher performance, and took up less floor space, power, and cooling.

Later, large corporations were ordering hundreds of vm/4341s at a time for deployment in departmental areas (with 3370 FBA) ... sort of the leading edge of the distributed departmental computing tsunami. Inside IBM, departmental conference rooms were becoming a scarce commodity since so many were repurposed for distributed vm4341s.

... topic drift ... then there is the demise of ACS/360 ... claims were that executives were afraid that it would advance the state-of-the-art too fast and IBM would lose control of the market (Amdahl leaves shortly after) ... it also has some of the features that show up more than 20 years later with ES/9000 in the 90s
https://people.cs.clemson.edu/~mark/acs_end.html

some recent posts mentioning 4341 benchmarks for national lab
https://www.garlic.com/~lynn/2023d.html#86 5th flr Multics & 4th flr science center
https://www.garlic.com/~lynn/2023d.html#85 Airline Reservation System
https://www.garlic.com/~lynn/2023d.html#84 The Control Data 6600
https://www.garlic.com/~lynn/2023d.html#81 Taligent and Pink
https://www.garlic.com/~lynn/2023d.html#43 AI Scale-up
https://www.garlic.com/~lynn/2023d.html#19 IBM 3880 Disk Controller
https://www.garlic.com/~lynn/2023d.html#15 Boeing 747
https://www.garlic.com/~lynn/2023d.html#4 Some 3090 & channel related trivia:
https://www.garlic.com/~lynn/2023d.html#3 IBM Supercomputer
https://www.garlic.com/~lynn/2023c.html#90 More Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#58 IBM Downfall
https://www.garlic.com/~lynn/2023c.html#46 IBM DASD
https://www.garlic.com/~lynn/2023c.html#29 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023c.html#19 IBM Downfall
https://www.garlic.com/~lynn/2023c.html#7 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#78 IBM 158-3 (& 4341)
https://www.garlic.com/~lynn/2023b.html#74 IBM Breakup
https://www.garlic.com/~lynn/2023b.html#20 IBM Technology
https://www.garlic.com/~lynn/2023.html#110 If Nothing Changes, Nothing Changes
https://www.garlic.com/~lynn/2023.html#82 Memories of Mosaic
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2023.html#52 IBM Bureaucrats, Careerists, MBAs (and Empty Suits)
https://www.garlic.com/~lynn/2023.html#18 PROFS trivia
https://www.garlic.com/~lynn/2023.html#1 IMS & DB2

--
virtualization experience starting Jan1968, online at home since Mar1970

The IBM mainframe: How it runs and why it survives

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: The IBM mainframe: How it runs and why it survives
Date: 25 July, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#93 The IBM mainframe: How it runs and why it survives

re: upward compatibility; the exception was the disastrous detour for "Future System" in the 1st half of the 70s, which was completely different from 370 and was going to completely replace 370. During FS, internal politics was killing off 370 projects (the claim is that the lack of new 370s during FS is credited with giving the clone 370 makers their market foothold). Eventually FS implodes and there is a mad rush getting stuff back into the 370 product pipelines, including kicking off the quick&dirty 303x & 3081 efforts. some more detail
http://www.jfsowa.com/computer/memo125.htm

I continued to work on 360&370 all during the FS period, even periodically ridiculing what they were doing ... which wasn't a career enhancing activity (being told that the only promotions&raises were in FS).

from Ferguson & Morris, "Computer Wars: The Post-IBM World", Time Books, 1993 ....
http://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394

and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrong headedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive.

... snip ...

... one of the last nails in the FS coffin was analysis by the Houston Science Center that if a 370/195 application was redone for an FS machine made out of the fastest available technology, it would have the throughput of a 370/145 (about a 30 times slowdown).

Amdahl departed IBM shortly after ACS/360 was killed (some claim that executives killed the project because they were afraid it would advance the state-of-the-art too fast and IBM would lose control of the market)
https://people.cs.clemson.edu/~mark/acs_end.html

Shortly after Amdahl started his company making clone 370s, he gave a talk at MIT in a large auditorium ... some of us from the science center went over. He was asked what justification he used to get investment in his company. He said he told them that even if IBM were to totally walk away from 370, customers had so much invested in 360/370 software that it would keep his company in business for decades. It seemed to imply he was aware of FS, but in later years he claimed he had no knowledge of FS at the time.

some account of how Learson failed in blocking the bureaucrats, careerists and MBAs from destroying the Watson legacy.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

370/148 masthead/banner

From: Lynn Wheeler <lynn@garlic.com>
Subject: 370/148 masthead/banner
Date: 25 July, 2023
Blog: Facebook
banner/masthead, 13 striper, 15 striper (the pages have gone 404, but live on at the wayback machine)
https://web.archive.org/web/20070830203004/http://ibmcollectables.com/ibmlogo.html
https://web.archive.org/web/20070830203502/http://ibmcollectables.com/IBMlogo13.html
https://web.archive.org/web/20080113020030/http://ibmcollectables.com/IBMlogo15.html

After "future system" implodes, there was mad rush to get stuff back into the 370 product pipelines, including kicking off quick&dirty 303x & 3081 efforts in parallel. some ref
http://www.jfsowa.com/computer/memo125.htm

Endicott cons me into helping with virgil/tully (138/148) and the vm370 ECPS microcode assist ... archived post with some early ECPS analysis
https://www.garlic.com/~lynn/94.html#21

and an effort to have vm370 preinstalled on every 138/148 shipped. However, the head of POK (high-end mainframes) was in the process of convincing corporate to kill the vm370 product, shut down the development group and transfer all the people to POK (to work on MVS/XA, claiming that otherwise MVS/XA wouldn't ship on time). They weren't planning on telling the people until just prior to the shutdown/move in order to minimize the number that might escape. The information leaked early and several managed to escape into the Boston area (the joke was that the head of POK was a major contributor to the new DEC VMS effort). There was a hunt for the leaker ... but nobody gave up the leaker (fortunately for me).

Endicott managed to acquire the vm370 product mission, but they had to reconstitute a development group from scratch ... however they weren't able to get corporate permission to pre-install vm370 on every 138/148 shipped.

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

recent posts mentioning POK killing the vm370 product, 138/148, ECPS
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
https://www.garlic.com/~lynn/2023c.html#79 IBM TLA
https://www.garlic.com/~lynn/2023c.html#55 IBM VM/370
https://www.garlic.com/~lynn/2023c.html#10 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#64 Another 4341 thread
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2023.html#49 23Jun1969 Unbundling and Online IBM Branch Offices
https://www.garlic.com/~lynn/2023.html#18 PROFS trivia
https://www.garlic.com/~lynn/2022h.html#108 IBM 360
https://www.garlic.com/~lynn/2022h.html#39 IBM Teddy Bear
https://www.garlic.com/~lynn/2022h.html#27 370 virtual memory
https://www.garlic.com/~lynn/2022g.html#68 Datacenter Vulnerability
https://www.garlic.com/~lynn/2022g.html#65 IBM DPD
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022f.html#17 What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022e.html#86 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022d.html#60 VM/370 Turns 50 2Aug2022
https://www.garlic.com/~lynn/2022d.html#56 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022d.html#55 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022c.html#101 IBM 4300, VS1, VM370
https://www.garlic.com/~lynn/2022.html#98 Virtual Machine SIE instruction
https://www.garlic.com/~lynn/2022.html#55 Precursor to current virtual machines and containers
https://www.garlic.com/~lynn/2022.html#33 138/148
https://www.garlic.com/~lynn/2022.html#29 IBM HONE

--
virtualization experience starting Jan1968, online at home since Mar1970

The IBM mainframe: How it runs and why it survives

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: The IBM mainframe: How it runs and why it survives
Date: 25 July, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#93 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#94 The IBM mainframe: How it runs and why it survives

Parallel access volumes
https://www.ibm.com/docs/en/ds8880/8.4?topic=volumes-parallel-access

In 1980, STL (now silicon valley lab) was bursting at the seams and they were moving 300 people from the IMS group to an offsite bldg, with dataprocessing service back to the STL datacenter. They had tried "remote 3270" ... but found the human factors totally unacceptable. I get con'ed into doing channel-extender support so local channel-attached 3270 controllers can be placed at the offsite bldg ... with no perceived difference in human factors between inside STL and offsite. The hardware vendor wants IBM to release my support, but there are some engineers in POK playing with some serial stuff who were afraid that if my stuff was in the market, it would make it harder to get their stuff released (and they get the release vetoed).

In 1988, the IBM branch office asks if I can help LLNL get some serial stuff they are playing with standardized ... which quickly becomes the fibre channel standard ("FCS", including some stuff I had done in 1980), initially full-duplex 1gbit/sec, 2gbit/sec aggregate (200mbyte/sec). Finally in the 90s, the POK engineers get their stuff released with ES/9000 as ESCON (when it is already obsolete, 17mbyte/sec). Then some POK engineers become involved in FCS and define a heavy-weight protocol that radically cuts the native throughput, which is eventually released as FICON.

The most recent published benchmark I can find is z196 "PEAK I/O" getting 2M IOPS with 104 FICON (running over 104 FCS). About the same time, an FCS was announced for E5-2600 blades claiming over a million IOPS (two such FCS getting higher throughput than 104 FICON running over 104 FCS).

note also that no real CKD DASD have been manufactured for decades, all being simulated on industry-standard fixed-block disks. Z196 trivia: a max-configured z196 had a 50BIPS rating and went for $30M ($600,000/BIP). By comparison, E5-2600 blades had a 500BIPS rating (ten times z196) and an IBM base list price of $1815 ($3.60/BIP; this was before IBM sold off that server business).
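
Rough back-of-the-envelope arithmetic for those cited figures (illustrative Python only; the numbers are just the ones quoted above, nothing here is a new measurement):

# illustrative arithmetic for the figures cited above
z196_iops, ficon_channels = 2_000_000, 104
print(z196_iops / ficon_channels)               # ~19,231 IOPS per FICON channel
fcs_iops = 1_000_000                            # single FCS claimed for an E5-2600 blade
print(fcs_iops / (z196_iops / ficon_channels))  # one FCS ~ 52 FICON channels worth
print(30_000_000 / 50)                          # max z196: $30M / 50BIPS = $600,000/BIP
print(1815 / 500)                               # E5-2600 blade: $1815 / 500BIPS ~ $3.6/BIP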

A claim can be made that PAV is required to compensate for the shortcomings of CKD DASD (now purely emulation instead of using the real disks directly) and the FICON protocol (instead of the native fibre-channel standard).

STL trivia: the 3270 channel-attached terminal controllers had been spread across all the channels, shared with disk controllers. Moving the controllers to the channel-extender increased system throughput by 10-15% (the channel-extender significantly reduced the channel busy for running 3270 channel programs and doing 3270 I/O, improving disk I/O). There was some consideration of deploying channel-extenders for all 3270 channel-attached controllers (even those purely local within the bldg) because of the improvement in mainframe channel efficiency and system throughput.

The last product we did at IBM was HA/CMP. It started out as HA/6000 for the NYTimes to move their newspaper system (ATEX) off DEC VAXCluster to RS/6000. I rename it HA/CMP when I start doing technical/scientific cluster scale-up with the national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Informix, Ingres, Sybase). I also make sure that FCS is capable of handling large concurrent activity from large HA/CMP clusters. End of Jan1992, cluster scale-up is transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we were told we couldn't work on anything with more than four processors. We leave IBM a few months later.

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
ficon (, fcs) posts
https://www.garlic.com/~lynn/submisc.html#ficon
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

some recent z196 "peak i/o" benchmark
https://www.garlic.com/~lynn/2023d.html#61 mainframe bus wars, How much space did the 68000 registers take up?
https://www.garlic.com/~lynn/2023d.html#43 AI Scale-up
https://www.garlic.com/~lynn/2023d.html#38 IBM 3278
https://www.garlic.com/~lynn/2023d.html#4 Some 3090 & channel related trivia:
https://www.garlic.com/~lynn/2023c.html#46 IBM DASD
https://www.garlic.com/~lynn/2023c.html#25 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#20 IBM Technology
https://www.garlic.com/~lynn/2023.html#89 IBM San Jose
https://www.garlic.com/~lynn/2022h.html#114 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022h.html#113 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022g.html#71 Mainframe and/or Cloud
https://www.garlic.com/~lynn/2022f.html#10 9 Mainframe Statistics That May Surprise You
https://www.garlic.com/~lynn/2022e.html#100 Mainframe Channel I/O
https://www.garlic.com/~lynn/2022e.html#89 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#72 FedEx to Stop Using Mainframes, Close All Data Centers By 2024
https://www.garlic.com/~lynn/2022e.html#49 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022e.html#26 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#18 3270 Trivia
https://www.garlic.com/~lynn/2022d.html#48 360&370 I/O Channels
https://www.garlic.com/~lynn/2022d.html#6 Computer Server Market
https://www.garlic.com/~lynn/2022c.html#111 Financial longevity that redhat gives IBM
https://www.garlic.com/~lynn/2022c.html#67 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022b.html#66 David Boggs, Co-Inventor of Ethernet, Dies at 71
https://www.garlic.com/~lynn/2022b.html#57 Fujitsu confirms end date for mainframe and Unix systems
https://www.garlic.com/~lynn/2022b.html#15 Channel I/O
https://www.garlic.com/~lynn/2022.html#95 Latency and Throughput
https://www.garlic.com/~lynn/2022.html#84 Mainframe Benchmark
https://www.garlic.com/~lynn/2022.html#76 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#69 IBM Bus&Tag Channels
https://www.garlic.com/~lynn/2022.html#13 Mainframe I/O

--
virtualization experience starting Jan1968, online at home since Mar1970

The IBM mainframe: How it runs and why it survives

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: The IBM mainframe: How it runs and why it survives
Date: 26 July, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#96 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#94 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#93 The IBM mainframe: How it runs and why it survives

... while doing HA/CMP, I was asked to do a section for the corporate continuous availability strategy document ... but the section got pulled when both AS/400 (Rochester) and mainframe (POK) complained they couldn't meet the requirements ... I speculate that is part of the reason that cluster scale-up was transferred (for technical/scientific *ONLY*) ... as well as complaints from the (mainframe) DB2 group that if I was allowed to continue (with commercial cluster scale-up), it would be at least 5yrs ahead of them. For early HA/CMP we worked with Hursley on their 9333 disk system, originally full-duplex 80mbit/sec serial copper (using packetized SCSI commands). I had wanted it to morph into a 1/8th-speed version interoperable with FCS ... but instead it morphs into 160mbit, incompatible SSA (after we depart)
https://en.wikipedia.org/wiki/Serial_Storage_Architecture
evolving into sustained 60mbytes/sec non-RAID and 35mbytes/sec RAID

ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
availability posts
https://www.garlic.com/~lynn/submain.html#availability

Note: after transferring to SJR in the 70s, I was allowed to wander around most IBM & customer datacenters in silicon valley, including bldg14 (disk engineering) and bldg15 (disk product test) across the street. They were running prescheduled, 7x24, around-the-clock, stand-alone testing and mentioned that they had recently tried MVS, but it had a 15min mean-time-between-failure (in that environment). I offered to rewrite the I/O supervisor to make it bullet proof and never fail, allowing any amount of on-demand, concurrent testing (greatly improving productivity). I then wrote an (internal only) research report on the work and happened to mention the MVS 15min MTBF ... bringing down the wrath of the MVS group on my head (informally I was told they tried to have me separated from the company). Later, before 3380s were shipping, FE had a regression test of 57 simulated errors that were likely to occur; MVS was still failing in all cases, and in 2/3rds of the cases there was no indication of what caused MVS to fail (joke about MVS "recovery", repeatedly covering up before finally failing) ... and I didn't feel badly.

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk

a few past posts mentioning MVS failing for FE 3380 error regression
https://www.garlic.com/~lynn/2023d.html#72 Some Virtual Machine History
https://www.garlic.com/~lynn/2023c.html#25 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#80 IBM 158-3 (& 4341)
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2022e.html#10 VM/370 Going Away
https://www.garlic.com/~lynn/2022d.html#97 MVS support
https://www.garlic.com/~lynn/2022d.html#11 Computer Server Market
https://www.garlic.com/~lynn/2022.html#44 Automated Benchmarking
https://www.garlic.com/~lynn/2021.html#6 3880 & 3380
https://www.garlic.com/~lynn/2019b.html#53 S/360
https://www.garlic.com/~lynn/2018d.html#86 3380 failures
https://www.garlic.com/~lynn/2010n.html#15 Mainframe Slang terms
https://www.garlic.com/~lynn/2007.html#2 "The Elements of Programming Style"

I had also been sucked into working with Jim Gray and Vera Watson on System/R (the original SQL/relational implementation) and then involved in the tech transfer to Endicott for SQL/DS ... "under the radar" while the company was preoccupied with the next great DBMS, "EAGLE". Then when "EAGLE" implodes, a request was made for how fast System/R could be ported to MVS ... which is eventually released as "DB2" (originally for decision support only). trivia: when Jim Gray left for Tandem, he was palming off IMS DBMS consulting on me, as well as supporting the BofA pilot for System/R on 60 distributed vm/4341s.

System/R posts
https://www.garlic.com/~lynn/submain.html#systemr

and some FCS&FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

posts mentioning 9333, SSA, & FCS
https://www.garlic.com/~lynn/2022e.html#47 Best dumb terminal for serial connections
https://www.garlic.com/~lynn/2022b.html#15 Channel I/O
https://www.garlic.com/~lynn/2021k.html#127 SSA
https://www.garlic.com/~lynn/2021g.html#1 IBM ESCON Experience
https://www.garlic.com/~lynn/2019b.html#60 S/360
https://www.garlic.com/~lynn/2019b.html#57 HA/CMP, HA/6000, Harrier/9333, STK Iceberg & Adstar Seastar
https://www.garlic.com/~lynn/2013m.html#99 SHARE Blog: News Flash: The Mainframe (Still) Isn't Dead
https://www.garlic.com/~lynn/2013m.html#96 SHARE Blog: News Flash: The Mainframe (Still) Isn't Dead
https://www.garlic.com/~lynn/2012m.html#2 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012k.html#77 ESCON
https://www.garlic.com/~lynn/2012k.html#69 ESCON
https://www.garlic.com/~lynn/2012j.html#13 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2011e.html#31 "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2010j.html#62 Article says mainframe most cost-efficient platform
https://www.garlic.com/~lynn/2010i.html#61 IBM to announce new MF's this year
https://www.garlic.com/~lynn/2010h.html#63 25 reasons why hardware is still hot at IBM
https://www.garlic.com/~lynn/2010.html#44 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009s.html#31 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2009q.html#32 Mainframe running 1,500 Linux servers?
https://www.garlic.com/~lynn/2008p.html#43 Barbless
https://www.garlic.com/~lynn/2006w.html#20 cluster-in-a-rack
https://www.garlic.com/~lynn/2006p.html#46 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2005m.html#55 54 Processors?
https://www.garlic.com/~lynn/2005m.html#46 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005m.html#35 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005.html#50 something like a CTC on a PC
https://www.garlic.com/~lynn/2003o.html#54 An entirely new proprietary hardware strategy
https://www.garlic.com/~lynn/2002j.html#15 Unisys A11 worth keeping?
https://www.garlic.com/~lynn/96.html#15 tcp/ip
https://www.garlic.com/~lynn/95.html#13 SSA

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM DASD, Virtual Memory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM DASD, Virtual Memory
Date: 27 July, 2023
Blog: Facebook
The univ. had been sold a 360/67 for tss/360 to replace a 709/1401, and the 1401 was temporarily replaced with a 360/30. I took a 2 credit hr intro to fortran/computers ... and at the end of the semester I was hired to rewrite 1401 MPIO for the 360/30 (the 360/30 had 1401 emulation, so could continue to run 1401 MPIO; I guess my activity was just getting 360 experience). The univ shut down the datacenter on weekends and I would have the whole place dedicated to myself (although 48hrs w/o sleep made Monday classes hard). I was given a pile of hardware and software manuals and got to design and implement my own monitor, device drivers, interrupt handlers, error retry, storage management, etc ... within a few weeks, I had a 2000 card assembler program. Within a year of taking the intro class, the 360/67 arrived and I was hired fulltime responsible for OS/360 (tss/360 never came to production fruition so it ran as a 360/65). The 360/67 came with a 2301 drum (for tss/360 paging) ... but I managed to cram sys1.svclib onto the 2301.

Later, people came out from the Cambridge Science Center to install (virtual machine) CP67 (3rd installation after CSC itself and MIT Lincoln Labs) ... and I got to play with it on my 48hr weekend dedicated time. That CP67 did FIFO DASD I/O queuing, and paging was FIFO with a single page transfer per I/O. I changed it to ordered seek queuing for 2314 and to multi-page I/O transfers chained in rotational order ... everything queued on the same (2314) cylinder and everything in the queue for the 2301 drum. That took the 2301 drum from around 75 page transfers/sec "peak" to capable of 270 page transfers/sec.
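
A minimal sketch of the ordered seek queuing idea (hypothetical Python for illustration; the actual change was in CP67's 360 assembler I/O supervisor, and the drum case additionally chained all queued pages into a single channel program in rotational order):

# Illustrative only -- FIFO services requests in arrival order; ordered seek
# ("elevator") services them in cylinder order along the current arm sweep.
import bisect

class OrderedSeekQueue:
    def __init__(self):
        self.cyls = []          # pending request cylinders, kept sorted
        self.direction = +1     # current direction of arm travel

    def add(self, cylinder):
        bisect.insort(self.cyls, cylinder)

    def next(self, arm_position):
        if not self.cyls:
            return None
        i = bisect.bisect_left(self.cyls, arm_position)
        if self.direction > 0 and i < len(self.cyls):
            return self.cyls.pop(i)        # nearest request at/above the arm
        if self.direction < 0 and i > 0:
            return self.cyls.pop(i - 1)    # nearest request below the arm
        self.direction = -self.direction   # end of sweep: reverse and continue
        return self.next(arm_position)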

The 2841 could control a 2303 ... but a special controller was needed for the 2301 because of its much higher (over 1mbyte/sec) transfer rate. Basically the 2301 was a 2303 that read/wrote four tracks in parallel (four times the transfer rate) ... 1/4th as many tracks, with each track four times larger. CP67 used the TSS/360 2301 format with nine 4k pages formatted per pair of tracks (four pages on the first of the pair, with the 5th page spanning the end of the first track and the start of the 2nd track).
https://en.wikipedia.org/wiki/IBM_drum_storage
https://en.wikipedia.org/wiki/IBM_drum_storage#IBM_2301
https://en.wikipedia.org/wiki/IBM_drum_storage#IBM_2303
https://en.wikipedia.org/wiki/History_of_IBM_magnetic_disk_drives

After graduating, I joined the science center ... then 370 was announced and then the decision was made to make all 370s virtual memory. A decade+ ago, I was asked to track down the decision to make all 370s virtual memory .... basically MVT storage management was so bad that region sizes had to be specified four times larger than used; as a result a typical 1mbyte 370/165 customer could only run four regions concurrently (insufficient to keep the machine busy and justified). Mapping MVT to a 16mbyte virtual address space (similar to running MVT in a CP67 16mbyte virtual machine) would allow increasing the number of concurrently running regions by a factor of four with little or no paging. old archived post with pieces of the email exchange:
https://www.garlic.com/~lynn/2011d.html#73
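
The scaling argument, with made-up round numbers purely for illustration (not actual MVT measurements):

# illustrative arithmetic only -- round numbers assumed
real_memory   = 1_000_000       # 1mbyte 370/165 (ignoring the fixed nucleus)
declared_size = 250_000         # region size as specified, 4x what is actually touched
used_size     = declared_size // 4
print(real_memory // declared_size)  # real-storage MVT: ~4 concurrent regions
print(real_memory // used_size)      # MVT in 16mbyte virtual storage: ~16 regions
                                     # (only the touched pages need real memory)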

A joint, distributed development project was started between CSC & Endicott (over the burgeoning internal network that originated at CSC) to modify CP67 ("CP67H") to add machine simulation of the 370 & 370 virtual memory architecture. Then additional modifications were made so CP67 ("CP67I") could run on a 370 virtual memory machine. This was in regular use: CP67I running in a CP67H 370 virtual machine ... which ran in a 360/67 virtual machine under "CP67L" on the real CSC 360/67, a year before the first engineering 370/145 was operational with virtual memory (the extra layer was for security, not wanting the unannounced 370 virtual memory leaking, since CSC CP67L was also in use by professors, staff, and students at Boston area univ). After engineering 370s with virtual memory started appearing, some engineers from San Jose added 3330 and 2305 device support to "CP67I" for "CP67SJ"
https://en.wikipedia.org/wiki/History_of_IBM_magnetic_disk_drives#IBM_3330
https://en.wikipedia.org/wiki/History_of_IBM_magnetic_disk_drives#IBM_2305
note 2302 is more like a 2305 fixed-head per track disk than 2301/2303 fixed-head drum
https://en.wikipedia.org/wiki/History_of_IBM_magnetic_disk_drives#IBM_2302

Note "CP67SJ" was in extensive use ... even after the "official" VM370 product became available (in the morph from CP67->VM370 they greatly simplified and/or dropped lots of CP67). One of my hobbies after joining IBM was enhanced production operating system for internal datacenters ... and I spend some amount of 1974 upgrading VM370 to production status for "CSC/VM" internal distribution. Some old email
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

Other trivia: circa 1980, IBM contracted with a vendor for electronic disks that simulated 2305 drums, given model number "1655". They could also be configured as FBA devices with 3mbyte/sec transfer.

Cambridge Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

some posts mentioning the disk optimization of head switching between tracks within the rotational window.
https://www.garlic.com/~lynn/2023.html#38 Disk optimization
https://www.garlic.com/~lynn/2013c.html#74 relative mainframe speeds, was What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2010m.html#15 History of Hard-coded Offsets

some other mainframe I/O posts
https://www.linkedin.com/pulse/mainframe-channel-io-lynn-wheeler/
https://www.linkedin.com/pulse/mainframe-channel-redrive-lynn-wheeler/

some posts mentioning 2301, 2303, 2305, 1655
https://www.garlic.com/~lynn/2022d.html#46 MGLRU Revved Once More For Promising Linux Performance Improvements
https://www.garlic.com/~lynn/2019c.html#70 2301, 2303, 2305-1, 2305-2, paging, etc
https://www.garlic.com/~lynn/2018.html#81 CKD details
https://www.garlic.com/~lynn/2017k.html#44 Can anyone remember "drum" storage?
https://www.garlic.com/~lynn/2017d.html#65 Paging subsystems in the era of bigass memory
https://www.garlic.com/~lynn/2011j.html#9 program coding pads
https://www.garlic.com/~lynn/2006s.html#30 Why magnetic drums was/are worse than disks ?
https://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?
https://www.garlic.com/~lynn/2005r.html#51 winscape?
https://www.garlic.com/~lynn/2001c.html#17 database (or b-tree) page sizes

other posts mentioning CP67L, CP67H, CP67I, CP67SJ
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2022h.html#22 370 virtual memory
https://www.garlic.com/~lynn/2022e.html#94 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#80 IBM Quota
https://www.garlic.com/~lynn/2022d.html#59 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#55 Precursor to current virtual machines and containers
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021k.html#23 MS/DOS for IBM/PC
https://www.garlic.com/~lynn/2021g.html#34 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021g.html#6 IBM 370
https://www.garlic.com/~lynn/2021d.html#39 IBM 370/155
https://www.garlic.com/~lynn/2021c.html#5 Z/VM
https://www.garlic.com/~lynn/2019b.html#28 Science Center
https://www.garlic.com/~lynn/2018e.html#86 History of Virtualization
https://www.garlic.com/~lynn/2017.html#87 The ICL 2900
https://www.garlic.com/~lynn/2014d.html#57 Difference between MVS and z / OS systems
https://www.garlic.com/~lynn/2013.html#71 New HD
https://www.garlic.com/~lynn/2011b.html#69 Boeing Plant 2 ... End of an Era
https://www.garlic.com/~lynn/2010e.html#23 Item on TPF
https://www.garlic.com/~lynn/2010b.html#51 Source code for s/360
https://www.garlic.com/~lynn/2009s.html#17 old email
https://www.garlic.com/~lynn/2007i.html#16 when was MMU virtualization first considered practical?

--
virtualization experience starting Jan1968, online at home since Mar1970

Right-Wing Think Tank's Climate 'Battle Plan' Wages 'War Against Our Children's Future'

From: Lynn Wheeler <lynn@garlic.com>
Subject: Right-Wing Think Tank's Climate 'Battle Plan' Wages 'War Against Our Children's Future'
Date: 27 July, 2023
Blog: Facebook
Right-Wing Think Tank's Climate 'Battle Plan' Wages 'War Against Our Children's Future'. The Heritage Foundation's Mandate for Leadership would serve as a policy blueprint for the first 180 days if Trump--or another Republican--were to gain control of the White House in January 2025.
https://www.commondreams.org/news/right-wing-think-tank-anti-climate

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality

some past posts mentioning Heritage Foundation
https://www.garlic.com/~lynn/2023d.html#41 The Architect of the Radical Right
https://www.garlic.com/~lynn/2023c.html#51 What is the Federalist Society and What Do They Want From Our Courts?
https://www.garlic.com/~lynn/2022g.html#37 GOP unveils 'Commitment to America'
https://www.garlic.com/~lynn/2022c.html#118 The Death of Neoliberalism Has Been Greatly Exaggerated
https://www.garlic.com/~lynn/2021f.html#63 'A perfect storm': Airmen, F-22s struggle at Eglin nearly three years after Hurricane Michael
https://www.garlic.com/~lynn/2021e.html#88 The Bunker: More Rot in the Ranks
https://www.garlic.com/~lynn/2020.html#5 Book: Kochland : the secret history of Koch Industries and corporate power in America
https://www.garlic.com/~lynn/2020.html#4 Bots Are Destroying Political Discourse As We Know It
https://www.garlic.com/~lynn/2020.html#3 Meet the Economist Behind the One Percent's Stealth Takeover of America
https://www.garlic.com/~lynn/2019d.html#97 David Koch Was the Ultimate Climate Change Denier
https://www.garlic.com/~lynn/2019c.html#66 The Forever War Is So Normalized That Opposing It Is "Isolationism"
https://www.garlic.com/~lynn/2019.html#34 The Rise of Leninist Personnel Policies
https://www.garlic.com/~lynn/2012c.html#56 Update on the F35 Debate
https://www.garlic.com/~lynn/2012b.html#75 The Winds of Reform
https://www.garlic.com/~lynn/2012.html#41 The Heritage Foundation, Then and Now

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3083

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3083
Date: 28 July, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#90 IBM 3083
https://www.garlic.com/~lynn/2023d.html#91 IBM 3083

As an undergraduate in the 60s, responsible for os/360 at a univ that had been sold a 360/67 for tss/360 ... I had some exposure from local IBMers that had tss/360 training (and would claim I learned what not to do for single-level-store from tss/360 ... which was very similar to the design used for FS and then the s/38).

When I graduated, I joined the IBM science center and did a lot of work on cp67/cms ... including implementing a page-mapped filesystem for cms during the FS period ... one that avoided the FS single-level-store shortcomings (much of it coming from tss/360). When FS people visited CSC to talk technology ... I would ridicule lots of what they were doing.

FS gave any sort of filesystem that was even remotely page-mapped such a bad reputation ... that I was never able to get approval to ship it to customers (even though it gave a 3-5 times performance boost to the CMS filesystem). Tom Simpson (of HASP fame) had done something similar for the OS/360 filesystem, which he called "RASP"; it met similar opposition after the decision to make all 370s "virtual memory".

Trivia: one of my hobbies after joining IBM was enhanced production systems for internal datacenters (even the online world-wide sales&marketing HONE system was a long-time customer, as was Rochester) ... and eventually I migrated everything to VM370. Some of it was even picked up for customer release (the "wheeler scheduler", aka dynamic adaptive resource management, paging algorithms, pathlength rewrite/optimization, etc ... but not the cms page-mapped filesystem).

Tom Simpson left IBM not long after and went to work for Amdahl ... where he recreated RASP (in a clean room). IBM sued Amdahl about it ... even though IBM had no intention of doing anything with RASP ... and independent code analysis could find only a couple of short code sequences that might be considered similar to the original RASP.

Some other reference
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
https://www.linkedin.com/pulse/mainframe-channel-io-lynn-wheeler/
https://www.linkedin.com/pulse/mainframe-channel-redrive-lynn-wheeler/

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
page-mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap
HASP, ASP, JES2, JES3 posts
https://www.garlic.com/~lynn/submain.html#hasp
dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
paging posts
https://www.garlic.com/~lynn/subtopic.html#wsclock
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Operating System/360

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Operating System/360
Date: 28 July, 2023
Blog: Facebook
At the univ, they were sold a 360/67 (to replace a 709/1401) for tss/360 ... and I was hired fulltime responsible for os/360 (tss/360 never came to production fruition so it ran as a 360/65). My first SYSGEN was MFT9.5; then I started tearing apart STAGE2 so I could run it in the jobstream w/HASP (instead of the starter system), with careful dataset placement and PDS member ordering for optimized arm seek and (PDS directory) multi-track search. Never SYSGEN'ed MVT until (combined) release 15/16. Student fortran jobs ran under a second on the 709; initially on OS/360 they ran over a minute, and installing HASP cut the time in half. Optimized PDS member & dataset placement cut another 2/3rds (to 12.9secs). Then sometimes there was so much PTF activity adversely affecting my careful ordering and degrading student fortran time that I would have to do a SYSGEN (w/o waiting for the next release). Never ran better than the 709 until installing Univ. of Waterloo WATFOR. trivia: once when I had some 3rd shift test time at an IBM regional datacenter, I wandered around the region bldg and found an MVT debugging class and asked if I could sit in. After about 20mins the instructor asked me to leave because I kept suggesting better debugging.

Then before I graduated, I was hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidating all dataprocessing into an independent business unit/IBU to better monetize the investment, including offering services to non-Boeing entities, sort of early cloud). I thought the Renton datacenter was possibly the largest in the world: a couple hundred million in IBM 360s, an IBM 360/75 and a whole slew of 360/65s, with more 360/65s arriving and boxes constantly staged in the hallways around the machine room. The 360/75 sometimes ran classified work, when a black rope and guards were placed around the 360/75 area and heavy black felt was draped over the front console lights and the parts of the 1403s where printing was visible. Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing Field for payroll (although they enlarged the machine room to add a 360/67 for me to play with when I wasn't doing other stuff). A disaster scenario had the Renton datacenter being replicated up at the new 747 plant in Everett ... in case Mt Rainier heats up and the resulting mud slide takes out Renton. When I graduated, I joined the IBM Cambridge Science Center (instead of staying at Boeing & BCS).

R15/16 trivia ... DASD formatting was enhanced so the location of the VTOC could be specified. The VTOC was heavily accessed ... allocating a new dataset on a mostly full pack required the disk arm to move to the VTOC and then back to the other end of the disk. Placing the VTOC closer to the middle could help reduce avg disk arm travel.
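
A small illustration of why the middle placement helps (hypothetical Python; assumes a 2314-sized pack and that the arm is equally likely to be at any cylinder when a VTOC access arrives):

# illustrative simulation only -- expected one-way arm travel to reach the VTOC
import random

CYLS = 200                      # 2314 pack size, assumed for illustration

def avg_travel(vtoc_cyl, trials=100_000):
    return sum(abs(random.randrange(CYLS) - vtoc_cyl) for _ in range(trials)) / trials

print(avg_travel(0))            # VTOC at the edge: ~CYLS/2 average cylinders of travel
print(avg_travel(CYLS // 2))    # VTOC in the middle: ~CYLS/4, roughly half the travel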

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

some recent mentioning 709, 1401, os/360, sysgen, watfor
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#79 IBM System/360 JCL
https://www.garlic.com/~lynn/2023d.html#72 Some Virtual Machine History
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2023d.html#64 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#14 Rent/Leased IBM 360
https://www.garlic.com/~lynn/2023c.html#96 Fortran
https://www.garlic.com/~lynn/2023c.html#88 More Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023c.html#67 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#62 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#46 IBM DASD
https://www.garlic.com/~lynn/2023c.html#26 Global & Local Page Replacement
https://www.garlic.com/~lynn/2023c.html#25 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#75 IBM Mainframe
https://www.garlic.com/~lynn/2023b.html#26 DISK Performance and Reliability
https://www.garlic.com/~lynn/2023b.html#15 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023.html#65 7090/7044 Direct Couple
https://www.garlic.com/~lynn/2023.html#22 IBM Punch Cards

other recent posts mentioning Boeing CFO and BCS
https://www.garlic.com/~lynn/2023d.html#66 IBM System/360, 1964
https://www.garlic.com/~lynn/2023d.html#20 IBM 360/195
https://www.garlic.com/~lynn/2023d.html#15 Boeing 747
https://www.garlic.com/~lynn/2023c.html#86 IBM Commission and Quota
https://www.garlic.com/~lynn/2023c.html#66 Economic Mess and IBM Science Center
https://www.garlic.com/~lynn/2023c.html#15 IBM Downfall
https://www.garlic.com/~lynn/2023.html#118 Google Tells Some Employees to Share Desks After Pushing Return-to-Office Plan
https://www.garlic.com/~lynn/2023.html#58 Almost IBM class student
https://www.garlic.com/~lynn/2023.html#57 Almost IBM class student
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2023.html#12 IBM Marketing, Sales, Branch Offices
https://www.garlic.com/~lynn/2023.html#5 1403 printer

--
virtualization experience starting Jan1968, online at home since Mar1970

Typing, Keyboards, Computers

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Typing, Keyboards, Computers
Date: 28 July, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#83 Typing, Keyboards, Computers

DEC VAX/VMS sold into the same mid-range market as the IBM 4300s ... and in about the same numbers for small-number orders. The big difference was large corporations with orders of hundreds of VM/4300s at a time for placement out in departmental areas ... sort of the leading edge of the coming distributed computing tsunami. Old archived post of VAX sales, sliced and diced by model, year, US/non-US:
https://www.garlic.com/~lynn/2002f.html#0
By the mid-80s, it can be seen that server PCs and workstations were starting to take over the mid-range market.

Follow-ons to the 4331&4341 were the 4361&4381, for which IBM expected the same explosion in sales as the 4331&4341 ... but they suffered the same 2nd-half-of-the-80s fate as VAX.

Trivia: after IBM "Future System" project (replacing *ALL* mainframes and totally different) imploded in the mid-70s ... there was mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts. The head of (high-end mainframe) POK was also convincing corporate to kill the VM370 product, shutdown the VM370 development group (out at Burlington Mall off rt128), and transfer everybody to POK to work on MVS/XA (or supposedly MVS/XA wouldn't ship on time). They weren't planning on telling the people until just before the shutdown/move, hoping to minimize the number that might escape. The information leaked early and several manage to escape (joke that head of POK was contributor to the infant VAX/VMS effort). There was a hunt for the leak, fortunately for me, nobody gave up the leaker. Endicott (responsible for mid-range) managed to acquire the VM/370 product mission, but had to reconstitute a VM/370 development group from scratch.

Other trivia: in Jan1979, I was con'ed into doing benchmarks for a national lab that was looking at getting 70 vm/4341s for a compute farm ... sort of the leading edge of the coming cluster supercomputing tsunami.

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

some recent posts mentioning 4300 distributed and cluster supercomputing tsunamis
https://www.garlic.com/~lynn/2023d.html#93 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#86 5th flr Multics & 4th flr science center
https://www.garlic.com/~lynn/2023d.html#84 The Control Data 6600
https://www.garlic.com/~lynn/2023d.html#19 IBM 3880 Disk Controller
https://www.garlic.com/~lynn/2023d.html#1 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023c.html#46 IBM DASD
https://www.garlic.com/~lynn/2023b.html#78 IBM 158-3 (& 4341)
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2023.html#21 IBM Change
https://www.garlic.com/~lynn/2023.html#18 PROFS trivia
https://www.garlic.com/~lynn/2023.html#1 IMS & DB2

--
virtualization experience starting Jan1968, online at home since Mar1970

The IBM mainframe: How it runs and why it survives

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: The IBM mainframe: How it runs and why it survives
Date: 29 July, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#93 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#94 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#96 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#97 The IBM mainframe: How it runs and why it survives

I first ran into a case where lots of channel paths were needed (and later PAV), because of the significant channel busy from the way channel protocol chatter worked, with the STL 1980 channel-extender (mentioned previously). All the 3270 channel-attached controllers were spread across all the channels with the 3830 disk controllers. Moving all the 3270 controllers (to the offsite bldg) and off the (real) mainframe channels improved system throughput by 10-15% (i.e. the channel-extender boxes significantly cut the channel busy for the same amount of 3270 terminal operations). There was some discussion about adding channel-extenders to all STL systems (for 3270 controllers physically in the datacenter, to get the 10-15% increase in throughput for all systems).

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender

The next time was with the 3090. The new 3880 disk controller supported (with a special hardware path) 3mbyte/sec (3380) transfers but had a significantly slower microprocessor for all other operations. The 3090 group had configured the number of channels to meet target throughput assuming the 3880 controller was the same as the 3830 controller but with the capability for 3mbyte/sec transfer. When they found out how bad the 3880 controller channel busy really was, they realized they had to significantly increase the number of channels (channel paths) to offset 3880 channel busy that was significantly worse than anticipated. The increase in the number of channels also required adding an additional TCM ... the 3090 group semi-facetiously said they were going to bill the 3880 controller group for the increase in 3090 manufacturing cost. Note, marketing eventually respun the significant increase in the number of 3090 channels as it being a wonderful I/O machine (as opposed to being required to meet target throughput, compensating for the 3880 channel busy overhead).

Note: I first ran into the 3880's extreme channel busy after we deployed our own online service on the bldg15 engineering 3033 (dasd testing only took 1-2% of the cpu, so there was lots of spare capacity; we scraped together a 3830 and 3330 string). I got an early Monday morning call asking what I had done to the 3033 over the weekend; interactive response and throughput had extremely deteriorated. After a lot of back&forth, we found that somebody had replaced the online service 3830 disk controller with an engineering 3880. The 3880 was trying to mask its extremely slow microprocessor and the big increase in channel program processing overhead by signalling I/O complete (CE+DE) "early", expecting to finish its overhead overlapped with operating system overhead (before the next I/O was driven).

It turns out that in a live environment there typically could be waiting queued requests (rather than a single thread/task with the application needing to generate the next I/O). As a result, the 3880 had to respond to the SIOF with SM+BUSY (controller busy), the system requeued the pending request, and then the system had to wait for the CUE (controller end) interrupt to retry the pending request.
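
A minimal sketch of that start/requeue/retry dance (hypothetical Python pseudo-logic; channel.siof and the status strings are illustrative stand-ins, not an actual VM/370 interface):

# hypothetical sketch of the flow described above -- not actual VM/370 code
import collections

pending = collections.deque()       # queued I/O requests for this control unit

def start_next_io(channel):
    if pending:
        cc, status = channel.siof(pending[0])     # Start I/O Fast release (stand-in call)
        if cc == 1 and "SM+BUSY" in status:       # controller still busy housekeeping
            return                                # leave it queued; wait for CUE
        pending.popleft()                         # accepted; completion interrupt follows

def interrupt_handler(channel, status):
    if "CUE" in status:                           # controller end: it is free again
        start_next_io(channel)                    # retry the requeued request
    elif "CE+DE" in status:                       # channel end + device end
        start_next_io(channel)                    # drive the next queued request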

The 3880 then 1) tried to speed up controller processing by caching information for the current path; however, if the next request came in on a different path, it was back to much slower processing and higher channel busy, and 2) for the SM+BUSY case created by presenting the early I/O interrupt, just held the SIOF request until the controller was really ready for processing. I had actually done some super-short-pathlength software for (channel) multi-path load balancing ... but it had to be crippled for the 3880 (because the speed-up from the 3880's single-path caching outweighed the gain from multi-path operation) ... I could show increased throughput for multi-path load balancing with the 3830 controller, but not with the 3880 (falling back to primary with alternate paths).
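
The two path-selection policies, sketched side by side (illustrative Python only, not the actual pathing code):

# illustrative only -- two channel-path selection policies for the same string
def pick_path_load_balance(paths, busy):
    # 3830-style: rotate across free paths to spread channel busy
    for path in list(paths):
        if not busy[path]:
            paths.remove(path)
            paths.append(path)       # move to the end so other paths get tried first
            return path
    return None                      # all paths busy; leave the request queued

def pick_path_primary_alternate(paths, busy):
    # 3880-style: prefer the primary so the controller's cached path info keeps hitting
    for path in paths:               # paths listed primary first, then alternates
        if not busy[path]:
            return path
    return None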

posts mentioning getting to play disk engineer in bldg14 (disk engineering) and bldg15 (disk product test)
https://www.garlic.com/~lynn/subtopic.html#disk

posts mentioning 3033 trying to use 3880 and dealing with extra SM+BUSY and CUEs
https://www.garlic.com/~lynn/2023d.html#18 IBM 3880 Disk Controller
https://www.garlic.com/~lynn/2022g.html#4 3880 DASD Controller
https://www.garlic.com/~lynn/2022e.html#54 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2021.html#6 3880 & 3380
https://www.garlic.com/~lynn/2020.html#42 If Memory Had Been Cheaper
https://www.garlic.com/~lynn/2017g.html#64 What is the most epic computer glitch you have ever seen?
https://www.garlic.com/~lynn/2016h.html#50 Resurrected! Paul Allen's tech team brings 50-year -old supercomputer back from the dead
https://www.garlic.com/~lynn/2016b.html#79 Asynchronous Interrupts
https://www.garlic.com/~lynn/2015f.html#88 Formal definition of Speed Matching Buffer
https://www.garlic.com/~lynn/2013n.html#56 rebuild 1403 printer chain
https://www.garlic.com/~lynn/2011p.html#120 Start Interpretive Execution
https://www.garlic.com/~lynn/2011.html#36 CKD DASD
https://www.garlic.com/~lynn/2009r.html#52 360 programs on a z/10
https://www.garlic.com/~lynn/2009q.html#74 Now is time for banks to replace core system according to Accenture
https://www.garlic.com/~lynn/2009o.html#17 Broken hardware was Re: Broken Brancher
https://www.garlic.com/~lynn/2008d.html#52 Throwaway cores
https://www.garlic.com/~lynn/2006g.html#0 IBM 3380 and 3880 maintenance docs needed
https://www.garlic.com/~lynn/2004n.html#15 360 longevity, was RISCs too close to hardware?

posts mentioning 3090 needing more channel paths to offset the increase in 3880 channel busy (as well as an additional TCM).
https://www.garlic.com/~lynn/2023d.html#4 Some 3090 & channel related trivia:
https://www.garlic.com/~lynn/2023c.html#45 IBM DASD
https://www.garlic.com/~lynn/2023.html#41 IBM 3081 TCM
https://www.garlic.com/~lynn/2023.html#4 Mainrame Channel Redrive
https://www.garlic.com/~lynn/2022h.html#114 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022g.html#75 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022e.html#100 Mainframe Channel I/O
https://www.garlic.com/~lynn/2022e.html#49 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022c.html#106 TCMs & IBM Mainframe
https://www.garlic.com/~lynn/2022c.html#66 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022b.html#77 Channel I/O
https://www.garlic.com/~lynn/2022b.html#15 Channel I/O
https://www.garlic.com/~lynn/2022.html#13 Mainframe I/O
https://www.garlic.com/~lynn/2021k.html#122 Mainframe "Peak I/O" benchmark
https://www.garlic.com/~lynn/2021j.html#92 IBM 3278
https://www.garlic.com/~lynn/2021i.html#30 What is the oldest computer that could be used today for real work?
https://www.garlic.com/~lynn/2021f.html#23 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2021c.html#66 ACP/TPF 3083
https://www.garlic.com/~lynn/2021.html#60 San Jose bldg 50 and 3380 manufacturing

--
virtualization experience starting Jan1968, online at home since Mar1970

DASD, Channel and I/O long winded trivia

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: DASD, Channel and I/O long winded trivia
Date: 30 July, 2023
Blog: Facebook
The original 3380 had the equivalent of 20 track spacings between each data track; that was then cut in half to double the number of tracks (& cylinders) for the 3380E, and the spacing was cut again to triple the number of tracks (& cylinders) for the 3380K ... still with the same 3mbyte/sec channels. Other trivia: the (IBM) father of the RISC computer asks me to help with his "wide-head" idea ... a head that handles 18 closely spaced tracks ... the surface formatted with 16 data tracks plus servo tracks ... the "wide-head" would follow servo tracks on each side of the 16 data tracks, transferring data at 3mbytes/sec from each track, 48mbytes/sec aggregate. The problem was IBM mainframe I/O wouldn't support 48mbyte/sec I/O ... any more than it would support 48mbyte/sec RAID I/O (although various supercomputers supported HIPPI (sponsored by LANL) ... driven by disk array technology
https://en.wikipedia.org/wiki/HIPPI
... standardized version of the Cray (800mbit/sec) 100mbyte/sec channel

After transferring to San Jose Research, I got to wander around IBM and customer datacenters in silicon valley, including bldg14 (disk engineering) and bldg15 (disk product test) across the street. They were running stand-alone, prescheduled mainframe testing and said that they had tried MVS, but it had a 15min MTBF (in that environment), requiring manual re-ipl. I offered to rewrite the I/O supervisor to make it bullet-proof, allowing any amount of concurrent testing and greatly improving productivity. The downside was being increasingly sucked into playing disk engineer. When bldg15 got the (#3 or #4?) engineering 3033 for disk I/O testing, the testing only took a percent or two of the 3033 CPU. A 3830 controller and 3330 string were scrounged up and we put in our own private online service. At the time, the air-bearing simulation program was being run on the SJR 370/195 but only getting a few turn-arounds a month. We set things up so it could run on the 3033 ... only about half the processing power of the 195, but it could still get several turn-arounds a day. This was part of the design for thin-film, floating heads (which flew much closer to the surface, increasing data rate and allowing much closer track spacing), originally used in the 3370 FBA and then the 3380.
https://en.wikipedia.org/wiki/Disk_read-and-write_head

One of the disk engineers was involved in RAID
https://en.wikipedia.org/wiki/RAID

In 1977, Norman Ken Ouchi at IBM filed a patent disclosing what was subsequently named RAID 4.

... snip ...

In 1980, STL was bursting at the seams and I was sucked into doing channel-extender support. They were transferring 300 people from the IMS group to an offsite bldg, with dataprocessing support back in the STL datacenter. They had tried "remote 3270", but found the human factors totally unacceptable. Channel-extender allowed channel-attached 3270 controllers to be placed in the off-site bldg, with no perceived difference in human factors. The hardware vendor tried to get IBM to release my support (also used by the IBM Boulder installation), but a group of engineers in POK were working on some serial stuff and were concerned that if my stuff was in the market, it would make it harder to get their stuff released, and they got it vetoed.

Also starting in the early 80s, I had the HSDT project (T1 and faster computer links, both terrestrial and satellite) and was working with the director of NSF; we were supposed to get $20M to interconnect the NSF supercomputer centers (but then congress cuts the budget). One of the first long-haul T1 links was between the IBM Los Gatos lab and Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
IBM E&S lab in IBM Kingston, which had a boatload of Floating Point Systems boxes
https://en.wikipedia.org/wiki/Floating_Point_Systems
that had 40mbyte/sec disk arrays.

Then in 1988, the local IBM office asked if I could help LLNL get some serial stuff they were playing with standardized, which quickly becomes Fibre Channel Standard (FCS, including some stuff I had done in 1980) ... initially full-duplex, 1gbit/sec, 2gbit/sec aggregate.
https://en.wikipedia.org/wiki/Fibre_Channel

Note: back in 1980, it was found that moving the 3270 controllers to the offsite bldg increased system throughput by 10-15%; STL had spread its 3270 controllers across all the available system channels, shared with disks. It turned out that the channel-extender significantly decreased the channel busy time for doing the same amount of 3270 I/O (improving disk I/O throughput). STL even considered using channel-extenders for the local 3270 controllers inside STL (for all their systems).

Slightly earlier (late 70s), one Monday morning I got an irate call from bldg15 asking what I had done to the 3033 over the weekend; system throughput and interactive response had severely degraded. It eventually turned out that somebody had replaced the (online service) 3830 controller with an engineering test 3880 controller. Turns out they had been trying to mask how slow the 3880 microprocessor was ... taking much longer doing lots of housekeeping operations (compared to the 3830). They were presenting end-of-operation early, after data transfer completed, hoping to finish up internal overhead overlapped with system interrupt handling before the next I/O was started. Single-thread testing worked, but multiple concurrent tasks would have pending queued I/O to start (not having to wait on application delays generating the next I/O request). As a result an SIOF could be done before the 3880 was ready; the SIOF would get CC=1/SM+BUSY (controller busy), and the request would have to be requeued to wait for the CUE interrupt. Before the 3880 actually shipped, they changed it to still do the early interrupt, but instead of reflecting CC=1/SM+BUSY while the 3880 was busy doing its housekeeping overhead, just accepting the SIOF (faking that they were already starting the next operation).

Another 3880 gimmick was that they started caching the housekeeping info from the previous I/O ... cutting overhead when consecutive I/Os were from the same channel path. I had done some optimized multi-channel path load balancing getting higher throughput from 3830 ... however it didn't work for 3880, making throughput worse ... needing to drop back to primary/alternate paths for 3880 (to take advantage of consecutive I/Os on the same path).

The aggregate increase in 3880 channel busy (compared to 3830) shows up for 3090. 3090 had configured the number of channels to achieve target system throughput based on the assumption that the 3880 was the same as the 3830 (but with support for 3mbyte/sec transfer). When they find out how bad the 3880 was, they realize they would have to significantly increase the number of channels to achieve the target throughput. That increase required another TCM, and the 3090 group semi-facetiously claimed they would bill the 3880 group for the increase in 3090 manufacturing cost. IBM marketing eventually respins the increase in 3090 channels as it being a wonderful I/O machine (rather than needed to offset the significant increase in 3880 channel busy).

In late 80s, also had HA/CMP project.
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing

It started out HA/6000 for NYTimes to move their newspaper system (ATEX) from VAXCluster to RS/6000. I rename it HA/CMP when we start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Informix, Ingres, Sybase). Was also working with Hursley 9333 disks ... 80mbit/sec serial copper supporting packetized SCSI commands. I was hoping it could get morphed into something interoperable with FCS. After cluster scale-up is transferred for announce as the IBM supercomputer and we are told we couldn't work on anything with more than four processors, we leave IBM a few months later. Then 9333 morphs into SSA
https://en.wikipedia.org/wiki/Serial_Storage_Architecture

The POK engineers finally get their stuff released with ES/9000 in the 90s, as ESCON (when it is already obsolete, 17mbytes/sec).

Then POK becomes involved with FCS and define a heavy-weight protocol that drastically reduces throughput, that is eventually released as FICON
https://en.wikipedia.org/wiki/Fibre_Channel

There are various upper-level protocols for Fibre Channel, including two for block storage. Fibre Channel Protocol (FCP) is a protocol that transports SCSI commands over Fibre Channel networks.[3][4] FICON is a protocol that transports ESCON commands, used by IBM mainframe computers, over Fibre Channel. Fibre Channel can be used to transport data from storage systems that use solid-state flash memory storage medium by transporting NVMe protocol commands.

... snip ...

Latest public benchmark I've found is z196 "Peak I/O" that gets 2M IOPS using 104 FICON. About the same time, an FCS was announced for E5-2600 blades claiming over a million IOPS (two such FCS have higher throughput than 104 FICON running over 104 FCS).
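
Just the arithmetic behind that comparison (nothing beyond the two figures above):

z196_peak_iops = 2_000_000     # z196 "Peak I/O" benchmark, 104 FICON channels
ficon_channels = 104
native_fcs_iops = 1_000_000    # FCS announced for E5-2600 blade, per adapter

print(round(z196_peak_iops / ficon_channels))   # ~19,231 IOPS per FICON channel
print(2 * native_fcs_iops > z196_peak_iops)     # True: two native FCS out-run all 104 FICON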

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
CKD, FBA, multi-track search, etc posts
https://www.garlic.com/~lynn/submain.html#dasd
HSDT network posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
Fibre Channel Standard & FICON
https://www.garlic.com/~lynn/submisc.html#ficon
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

DASD, Channel and I/O long winded trivia

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: DASD, Channel and I/O long winded trivia
Date: 30 July, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia

Also after transferring to SJR, I was involved in working with Jim Gray and Vera Watson on the original SQL/relational System/R implementation (originally being done on VM/145). Then there was technology transfer ("under the radar", while the company was preoccupied with the next great DBMS, "EAGLE") to Endicott for SQL/DS. Then when "EAGLE" implodes, there were requests for how fast System/R could be ported to MVS ... eventually announced as DB2 (for decision-support *ONLY*).

I had offered the MVS group FBA support, but was told that even if I provided fully integrated and tested support, I still had to show a couple hundred million in incremental disk sales to cover the $26M for new documentation and training ... but ... since IBM was already selling every disk it could make, MVS FBA support wouldn't change disk sales (also I couldn't use lifetime savings in the business case).

As a result, IBM had to increasingly do CKD emulation on increasingly fixed-block technology (can be seen in the 3380 records/track calculation, where record sizes have to be rounded up to the 3380's 32-byte fixed "cell" size). Note: ECKD channel programming was originally for the 3880 speed matching buffer ("calypso") allowing 3mbyte/sec 3380s to work with 370 1.5mbyte/sec channels.
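
A minimal sketch of that records/track arithmetic for non-keyed records, as I recall it being commonly stated; the constants (1499 cells per track, 15 cells of per-record overhead, 32-byte cell) are from memory and should be checked against the 3380 reference. The point is only the round-up to whole cells:

import math

def records_per_track(data_len, cell=32, cells_per_track=1499, overhead_cells=15):
    # each record costs a fixed overhead plus its data rounded up to whole 32-byte cells
    return cells_per_track // (overhead_cells + math.ceil(data_len / cell))

print(records_per_track(4096))                        # ten 4K records per 3380 track
print(records_per_track(33), records_per_track(64))   # a 33-byte record costs the same as a 64-byte record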

Note: in the early 80s, large corporations were ordering hundreds of VM/4300s for distributed computing, placing them out in departmental areas (sort of the leading edge of the coming distributed computing tsunami). MVS wanted to play, but the new CKD DASD were datacenter 3380s; the only mid-range, non-datacenter DASD were FBA. Eventually CKD emulation was done on the 3370 FBA as the "3375". However, it didn't do MVS much good. Distributed computing was deploying large numbers of systems per support person ... while MVS still required multiple support people per system.

Note: During the Future System period in the 1st half of 70s (completely different and was going to completely replace 370), internal politics was killing off 370 efforts. When FS finally implodes, there was mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 303x&3081 efforts in parallel. The head of (high-end mainframe) POK also convinced corporate to kill the virtual machine product, shutdown the development group (out in Burlington mall off Rt.128) and transfer all the people to POK for MVS/XA (claiming otherwise MVS/XA wouldn't ship on time). Endicott eventually managed to acquire the VM/370 product mission, but had to reconstitute a development group from scratch.

Also, POK wasn't going to tell the VM370 group of the shutdown/move until last minute, to minimize the number that could escape ... however it leaked early and several managed to escape (including to the infant DEC VAX/VMS effort, joke was head of POK was major contributor to VAX/VMS). Then there was hunt for the leak, fortunately for me, nobody gave the source up.

System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
CKD, FBA, multi-track search, etc DASD posts
https://www.garlic.com/~lynn/submain.html#dasd

a few posts mentioning MVS FBA and calypso/ECKD
https://www.garlic.com/~lynn/2015f.html#86 Formal definition of Speed Matching Buffer
https://www.garlic.com/~lynn/2013.html#40 Searching for storage (DASD) alternatives
https://www.garlic.com/~lynn/2012j.html#12 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2011e.html#35 junking CKD; was "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2009e.html#61 "A foolish consistency" or "3390 cyl/track architecture"
https://www.garlic.com/~lynn/2002g.html#13 Secure Device Drivers

some related linkedin posts
https://www.linkedin.com/pulse/mainframe-channel-io-lynn-wheeler/
https://www.linkedin.com/pulse/mainframe-channel-redrive-lynn-wheeler/
https://www.linkedin.com/pulse/dasd-channel-io-long-winded-trivia-lynn-wheeler

some other posts mentioning the linkedin posts
https://www.garlic.com/~lynn/2023d.html#100 IBM 3083
https://www.garlic.com/~lynn/2023d.html#98 IBM DASD, Virtual Memory
https://www.garlic.com/~lynn/2023d.html#18 IBM 3880 Disk Controller
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
https://www.garlic.com/~lynn/2023c.html#103 IBM Term "DASD"
https://www.garlic.com/~lynn/2023c.html#96 Fortran
https://www.garlic.com/~lynn/2023c.html#92 TCP/IP, Internet, Ethernet, 3Tier
https://www.garlic.com/~lynn/2023b.html#80 IBM 158-3 (& 4341)
https://www.garlic.com/~lynn/2023b.html#32 Bimodal Distribution
https://www.garlic.com/~lynn/2023b.html#16 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023b.html#12 Open Software Foundation
https://www.garlic.com/~lynn/2023b.html#6 z/VM 50th - part 7
https://www.garlic.com/~lynn/2023.html#86 IBM San Jose
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#59 Classified Material and Security
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2023.html#51 IBM Bureaucrats, Careerists, MBAs (and Empty Suits)
https://www.garlic.com/~lynn/2023.html#49 23Jun1969 Unbundling and Online IBM Branch Offices
https://www.garlic.com/~lynn/2023.html#41 IBM 3081 TCM
https://www.garlic.com/~lynn/2023.html#38 Disk optimization
https://www.garlic.com/~lynn/2023.html#33 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#4 Mainrame Channel Redrive
https://www.garlic.com/~lynn/2023.html#0 AUSMINIUM FOUND IN HEAVY RED-TAPE DISCOVERY

--
virtualization experience starting Jan1968, online at home since Mar1970

DASD, Channel and I/O long winded trivia

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: DASD, Channel and I/O long winded trivia
Date: 31 July, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#105 DASD, Channel and I/O long winded trivia

At the end of a semester taking a 2 credit hr intro to fortran/computers, I was hired to re-implement 1401 MPIO for the 360/30. The univ. had been sold a 360/67 for tss/360 to replace the 709/1401. Temporarily the 1401 was replaced with a 360/30 to get 360 experience (the 360/30 had 1401 compatibility and could continue to run 1401 MPIO; I guess my job was part of gaining experience). The univ. shutdown the datacenter on weekends and I would have the place dedicated to myself (although 48hrs w/o sleep made monday classes hard). I was given a bunch of software&hardware manuals and got to design, implement, test my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc. Within a few weeks, I had a 2000 card assembler program. Then within a year of taking the intro class, the 360/67 had arrived and I was hired fulltime responsible for os/360 (tss/360 never came to production fruition). Trivia: student fortran ran under a second on the 709. Initially on 360/65 os/360, it ran over a minute. Installing HASP cut that time in half. I then start a revamp of STAGE2 SYSGEN, being able to run it in the production job stream (instead of the starter system) and reorging statements to carefully place datasets and PDS members to optimize arm seek and PDS directory multi-track search, cutting time another 2/3rds to 12.9secs. Sometimes heavy PTF activity destroying PDS member order would slowly increase time to 20+secs, at which point I would be forced to do an intermediate sysgen to get the performance back.

Then some from CSC come by to install CP67 (3rd installation after CSC itself and MIT Lincoln Labs) ... which I mostly played with (and rewrote) during my weekend window. Part of old SHARE presentation about OS360 MFT14 performance as well as running OS360 in CP67 virtual machine.
https://www.garlic.com/~lynn/94.html#18
The OS360 test ran in 322secs; initially under CP/67 it ran in 856secs (CP/67 CPU 534secs). After a few months, I'd reduced that to 435secs (CP/67 CPU 113secs).
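
Just the arithmetic on those numbers:

bare = 322            # OS360 MFT14 job stream stand-alone (secs)
initial_cp_cpu = 534  # CP/67 CPU secs added initially (856 secs total elapsed)
tuned_cp_cpu = 113    # CP/67 CPU secs after the pathlength work (435 secs total elapsed)

saved = initial_cp_cpu - tuned_cp_cpu
print(saved, "CP/67 CPU secs removed,", round(100 * saved / initial_cp_cpu), "% reduction")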

I then do dynamic adaptive resource management (the "wheeler" scheduler) and a new page replacement algorithm. The original CP67 did DASD I/O FIFO and page transfers were single 4k transfers per I/O. I implement ordered seek queuing (increases disk throughput and gives graceful degradation as load increases) and chaining of multiple page transfers per I/O (all queued for the same disk arm). For the 2301 drum, that increases throughput from approx. 75 page transfers/sec to a 270/sec peak.
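
A minimal sketch of the ordered seek queuing idea (elevator-style servicing by cylinder rather than FIFO); this is just an illustration, not the CP67 code, and the class/field names are my own:

from bisect import insort

class OrderedSeekQueue:
    # keep pending disk requests sorted by cylinder and service them in
    # sweeps across the disk, instead of strict FIFO arrival order
    def __init__(self):
        self.pending = []        # (cylinder, seq, request), kept sorted
        self.seq = 0             # tie-breaker for equal cylinders
        self.arm = 0             # current arm position
        self.direction = 1       # +1 sweeping toward higher cylinders

    def add(self, cylinder, request):
        insort(self.pending, (cylinder, self.seq, request))
        self.seq += 1

    def next(self):
        if not self.pending:
            return None
        ahead = [e for e in self.pending if (e[0] - self.arm) * self.direction >= 0]
        if not ahead:                         # end of sweep: reverse direction
            self.direction = -self.direction
            ahead = self.pending
        entry = min(ahead, key=lambda e: abs(e[0] - self.arm))
        self.pending.remove(entry)
        self.arm = entry[0]
        return entry[2]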

Before graduating, I'm then hired into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing in an independent business unit to better monetize the investment, including offering services to non-Boeing entities). I think the Renton datacenter was possibly the largest in the world, a couple hundred million in 360s; 360/65s arriving faster than they could be installed, boxes constantly staged in hallways around the machine room. Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing Field for payroll (although they enlarge the machine room to install a 360/67 for me to play with when I'm not doing other stuff). There is a disaster scenario plan to replicate Renton at the new 747 plant in Everett ... where Mt. Rainier heats up and the resulting mud slide takes out the Renton datacenter. When I graduate, I join IBM CSC, instead of staying at Boeing.

At CSC, one of my hobbies is enhanced production operating systems for internal datacenters, and the world-wide, online sales&marketing support HONE system was a long time customer. With the decision to add virtual memory to all 370s, some of the CSC people split off and take over the IBM Boston Programming Center on the 3rd flr to do VM370. In the morph of CP67 to VM370 some stuff was dropped and/or greatly simplified. I spend some of 1974 putting stuff back in. I had an automated benchmarking system with synthetic workloads where I could specify combinations of interactive, compute intensive, filesystem intensive, paging intensive, etc. I initially move that to VM370 ... but it consistently crashes VM370. The next thing I have to move is the CP67 kernel serialization mechanism in order to successfully get through the benchmarking. By early 1975, I have "CSC/VM" ready for internal distribution ... some old email:
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

Endicott cons me into helping work on ECPS microcode assist for 138/148 ... old archived post showing initial analysis of what parts of the kernel would be moved to microcode
https://www.garlic.com/~lynn/94.html#21
... and by the 370/125 group to support 125 multiprocessor running five shared memory (tightly-coupled) processors. Endicott is then concerned the 5-way 125 would overlap 148 throughput and in the escalation meetings, I'm required to argue both sides. Endicott gets the 5-way shutdown.

US HONE datacenters had been consolidated in silicon valley (trivia: when facebook 1st moves into silicon valley, it is into a new bldg built next door to the former US HONE datacenter) and had been enhanced to a maximum shared-DASD, "single system image", loosely-coupled configuration (with load balancing and fall-over across the complex) ... which I consider the largest in the world. I then add 2-way shared memory, tightly-coupled multiprocessor support, so they can add a 2nd processor to each system (16 processors total).

I then get con'ed into helping with a standard 16-way tightly-coupled 370 and the 3033 processor engineers are talked into helping in their spare time (lot more interesting than remapping 168-3 logic to 20% faster chips). Everything is going great guns until somebody tells the head of POK that it could be decades before the POK favorite son operating system (MVS) has effective 16-way support. Some of us are then invited to never visit POK again ... and the 3033 processor engineers are told "no distractions". POK doesn't ship a 16-way machine until after the turn of the century. Note: once the 3033 is out the door, the processor engineers start on trout/3090.

"wheeler scheduler" posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
page replacement algorithm posts
https://www.garlic.com/~lynn/subtopic.html#wsclock
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm
multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

some posts mentioning 709/1401, MPIO, WATFOR, Boeing CFO
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#75 IBM Mainframe
https://www.garlic.com/~lynn/2023b.html#15 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023.html#22 IBM Punch Cards
https://www.garlic.com/~lynn/2022h.html#99 IBM 360
https://www.garlic.com/~lynn/2022h.html#60 Fortran
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#31 Technology Flashback
https://www.garlic.com/~lynn/2022d.html#110 Window Display Ground Floor Datacenters
https://www.garlic.com/~lynn/2022d.html#95 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#57 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022d.html#10 Computer Server Market
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021j.html#64 addressing and protection, was Paper about ISO C
https://www.garlic.com/~lynn/2021j.html#63 IBM 360s
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles
https://www.garlic.com/~lynn/2018f.html#51 All programmers that developed in machine code and Assembly in the 1940s, 1950s and 1960s died?

--
virtualization experience starting Jan1968, online at home since Mar1970

DASD, Channel and I/O long winded trivia

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: DASD, Channel and I/O long winded trivia
Date: 31 July, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#105 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia

re: history posts, mar/apr 2005 IBM System mag article (gone 404, lives on at wayback machine): "Unofficial historian's dedication to the industry still thrives"
https://web.archive.org/web/20200103152517/http://archive.ibmsystemsmag.com/mainframe/stoprun/stop-run/making-history/

about my posts; some details in the article are a little garbled (at the time, most posts were to usenet, alt.folklore.computer, comp.arch, bit.listserv.ibm-main); archived to my web pages
https://www.garlic.com/~lynn/

there is also presentation I gave at '86 SEAS meeting and repeated at 2009 HillGang meeting
https://www.garlic.com/~lynn/hill0316g.pdf

past posts mentioning history article
https://www.garlic.com/~lynn/2022h.html#61 Retirement
https://www.garlic.com/~lynn/2022e.html#17 VM Workshop
https://www.garlic.com/~lynn/2022c.html#40 After IBM
https://www.garlic.com/~lynn/2022.html#75 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021j.html#59 Order of Knights VM
https://www.garlic.com/~lynn/2021h.html#105 Mainframe Hall of Fame
https://www.garlic.com/~lynn/2021e.html#24 IBM Internal Network
https://www.garlic.com/~lynn/2019b.html#4 Oct1986 IBM user group SEAS history presentation
https://www.garlic.com/~lynn/2018e.html#22 Manned Orbiting Laboratory Declassified: Inside a US Military Space Station
https://www.garlic.com/~lynn/2017g.html#8 Mainframe Networking problems
https://www.garlic.com/~lynn/2017f.html#105 The IBM 7094 and CTSS
https://www.garlic.com/~lynn/2016c.html#61 Can commodity hardware actually emulate the power of a mainframe?
https://www.garlic.com/~lynn/2016c.html#25 Globalization Worker Negotiation
https://www.garlic.com/~lynn/2015g.html#80 Term "Open Systems" (as Sometimes Currently Used) is Dead -- Who's with Me?
https://www.garlic.com/~lynn/2014d.html#42 Computer museums
https://www.garlic.com/~lynn/2013l.html#60 Retirement Heist
https://www.garlic.com/~lynn/2013k.html#29 The agency problem and how to create a criminogenic environment
https://www.garlic.com/~lynn/2013k.html#28 Flag bloat
https://www.garlic.com/~lynn/2013k.html#2 IBM Relevancy in the IT World
https://www.garlic.com/~lynn/2013h.html#87 IBM going ahead with more U.S. job cuts today
https://www.garlic.com/~lynn/2013h.html#77 IBM going ahead with more U.S. job cuts today
https://www.garlic.com/~lynn/2013f.html#61 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013f.html#49 As an IBM'er just like the Marines only a few good men and women make the cut,
https://www.garlic.com/~lynn/2013e.html#79 As an IBM'er just like the Marines only a few good men and women make the cut,
https://www.garlic.com/~lynn/2013.html#74 mainframe "selling" points
https://www.garlic.com/~lynn/2012p.html#60 Today in TIME Tech History: Piston-less Power (1959), IBM's Decline (1992), TiVo (1998) and More
https://www.garlic.com/~lynn/2012o.html#32 Does the IBM System z Mainframe rely on Obscurity or is it Security by Design?
https://www.garlic.com/~lynn/2012k.html#34 History--punched card transmission over telegraph lines
https://www.garlic.com/~lynn/2012g.html#87 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012g.html#82 How do you feel about the fact that today India has more IBM employees than US?
https://www.garlic.com/~lynn/2012.html#57 The Myth of Work-Life Balance
https://www.garlic.com/~lynn/2011p.html#12 Why are organizations sticking with mainframes?
https://www.garlic.com/~lynn/2011c.html#68 IBM and the Computer Revolution
https://www.garlic.com/~lynn/2010q.html#60 I actually miss working at IBM
https://www.garlic.com/~lynn/2010q.html#30 IBM Historic computing
https://www.garlic.com/~lynn/2010o.html#62 They always think we don't understand
https://www.garlic.com/~lynn/2010l.html#36 Great things happened in 1973
https://www.garlic.com/~lynn/2008p.html#53 Query: Mainframers look forward and back
https://www.garlic.com/~lynn/2008j.html#28 We're losing the battle
https://www.garlic.com/~lynn/2008b.html#66 How does ATTACH pass address of ECB to child?
https://www.garlic.com/~lynn/2008b.html#65 How does ATTACH pass address of ECB to child?
https://www.garlic.com/~lynn/2006q.html#26 garlic.com
https://www.garlic.com/~lynn/2006i.html#11 Google is full
https://www.garlic.com/~lynn/2006c.html#43 IBM 610 workstation computer
https://www.garlic.com/~lynn/2005h.html#19 Blowing My Own Horn
https://www.garlic.com/~lynn/2005e.html#14 Misuse of word "microcode"
https://www.garlic.com/~lynn/2005e.html#9 Making History

--
virtualization experience starting Jan1968, online at home since Mar1970

DASD, Channel and I/O long winded trivia

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: DASD, Channel and I/O long winded trivia
Date: 31 July, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#105 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#107 DASD, Channel and I/O long winded trivia

SNA, DASD, 801/RISC stories; late 70s there was effort to make all microprocessors "801/RISC" ("Iliad" chips for entry&mid range 370 processors, controllers, AS/400, etc). For various reasons they mostly floundered except for the ROMP 801/RISC for Displaywriter follow-on (person that retrofitted 370/xa access registers to 3033 as dual-address space mode also worked on 801/risc for 4300s, then left for HP & their "snake" risc). When Displaywriter follow-on was canceled, they decided to retarget ROMP for the UNIX workstation market, getting the company that had done PC/IX to do similar port of AT&T Unix to ROMP ... which eventually ships as PC/RT and "AIX". The follow-on was RIOS for RS/6000 with microchannel. However, the communication group was fiercely fighting off client/server and distributed computing and had gotten all microchannel cards heavily performance "kneecapped". AWD had done their own 4mbit token-ring card for the PC/RT (16bit PC/AT bus). However, AWD was restricted to just using the kneecapped PS2 microchannel cards. It turned out that the PC/RT 4mbit token-ring card had higher (per-card) throughput than the microchannel 16mbit token-ring card (a PC/RT 4mbit T/R server could have higher throughput than RS/6000 16mbit T/R server)

Late 80s, a senior disk engineer got a talk scheduled at an internal, world-wide, annual communication group conference, supposedly on 3174 performance, but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division (with their stranglehold on mainframe datacenters). The disk division was seeing data fleeing to more distributed-computing friendly platforms, with a drop in disk sales. The disk division had come up with a number of solutions, but they were all vetoed by the communication group with their corporate strategic ownership of everything crossing the datacenter walls.

One of the disk division executives had found a partial work-around, investing in distributed computing startups that would use IBM disks ... and he would periodically ask us to drop by his investments to see if could offer any help.

NOTE: It wasn't just mainframe DASD, but all mainframe business. A couple yrs later IBM has one of the largest losses in history of US companies and was being reorged into the 13 "baby blues" in preparation for breaking up the company.

801/RISC, Iliad, ROMP, RIOS, PC/RT, RS/6000, Power, Power/pc, etc. posts
https://www.garlic.com/~lynn/subtopic.html#801
communication group stranglehold on mainframe datacenters
https://www.garlic.com/~lynn/subnetwork.html#terminal
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner

--
virtualization experience starting Jan1968, online at home since Mar1970

DASD, Channel and I/O long winded trivia

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: DASD, Channel and I/O long winded trivia
Date: 31 July, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#105 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#107 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#108 DASD, Channel and I/O long winded trivia

I had an office in SJR/28 and part of a wing out in the Los Gatos lab/bldg29 ... but wandered around bldg14&15, STL/SVL, and a bunch of other places. Then I was blamed for online computer conferencing (precursor to IBM forums and modern social media) on the IBM internal corporate network. Folklore is that when the corporate executive committee was told about online computer conferencing and the internal corporate network, 5 of 6 wanted to fire me. I was then transferred to Yorktown, but left to live in San Jose with the SJR office and the Los Gatos wing (then got an office in Almaden when research moved up the hill). I did have to commute to Yorktown a couple times a month. misc. details
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
online computer conferencing
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal corporate network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

--
virtualization experience starting Jan1968, online at home since Mar1970

APL

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: APL
Date: 31 July, 2023
Blog: Facebook
Standard SE training used to include a sort of journeyman period as part of a large SE group on customer premises. Then the 23Jun1969 Unbundling announcement started to charge for (application) software (IBM managed to make the case that kernel software should still be free), maint, and SE services. They couldn't figure out how not to charge for trainee SEs at the customer site. This was somewhat the original birth of HONE ... 360/67 datacenters with CP/67 online access from branch offices, so trainee SEs could practice their operating system skills running in virtual machines.

The Cambridge science center also ports APL\360 to CMS as CMS\APL (much of the work was redoing APL\360 storage management from 16kbyte swapped workspaces to large virtual memory, demand-paged workspaces, along with adding an API for system services; enabling lots of real-world applications) and HONE starts offering CMS\APL-based sales&marketing support applications. The APL-based sales&marketing support applications soon come to dominate all HONE activity (and the SE training virtual machine use evaporates)

The HONE datacenters migrated from CP/67 to VM/370 and all the US HONE datacenters are consolidated in silicon valley ... across the back parking lot from the Palo Alto Science Center ... PASC has upgraded CMS\APL to APL\CMS for VM/370 (then APL\SV and VS\APL); PASC also did the 370/145 APL-assist microcode and the original 5100 work. I considered HONE APL-based sales&marketing support tools possibly the largest use of APL in the world (especially as HONE clones spread around the world).

trivia: when FACEBOOK 1st moves into silicon valley, it is into a new bldg built next to the former US HONE datacenter. note: After I graduated and joined IBM Cambridge Science Center, one of my hobbies was enhanced production operating systems for internal datacenters, and HONE was a long time customer.

unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundling
cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE (& APL) posts
https://www.garlic.com/~lynn/subtopic.html#hone

--
virtualization experience starting Jan1968, online at home since Mar1970

3380 Capacity compared to 1TB micro-SD

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 3380 Capacity compared to 1TB micro-SD
Date: 01 Aug, 2023
Blog: Facebook
3380 got more dense. The original 3380 had 20 track spacings between each data track (630MB per actuator; two hard disk assemblies, each with two actuators, for 2.52GB per box), then the spacing was cut in half for double the tracks (& cylinders) for the 3380E (5.04GB), then cut again for triple the tracks (& cylinders) for the 3380K (7.56GB).
https://www.ibm.com/ibm/history/exhibits/storage/storage_3380.html
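
The capacity arithmetic above, spelled out:

actuator_mb = 630                       # original 3380 actuator
box_mb = 4 * actuator_mb                # two HDAs x two actuators per box
print(box_mb, 2 * box_mb, 3 * box_mb)   # 2520, 5040, 7560 MB = 2.52GB (3380), 5.04GB (3380E), 7.56GB (3380K)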

above mentions 3880(/calypso) "speed matching buffer" to allow 3380 attaching to 370 channels (also inception of ECKD). Note 158-3 integrated channel microcode was one of the slowest channels; for 303x channel director, they used 158 engine, w/o 370 microcode and just the integrated channel microcode. A 3031 was two 158 engines, one with just 370 microcode and 2nd with just the integrated channel microcode.

FE had created 57 simulated errors they believed likely to occur, and in early testing (before customer ship), MVS was failing in all 57 cases (requiring manual re-ipl) and in 2/3rds of the cases there was no indication of what caused the failure (joke that MVS "recovery" was actually repeatedly covering up the original cause before giving up).

A capacity mapping program was done for migrating 3330->3380. When looking at disk-dependent system throughput, it calculated avg "arm accesses per megabyte", which required that 3380s only be loaded to 80% of capacity. At SHARE meetings there was the observation that IT accountants, managers, and executives wouldn't tolerate empty space (even if it reduced overall system throughput and increased cost/throughput). There were jokes about getting IBM to market a "high throughput" 3380 at a higher price (even if it was just a microcode load) to salve the feelings of the IT overhead personnel. The more realistic recommendation was to use the reserved space for extremely low use data.
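
A hedged sketch of the "arm accesses per megabyte" idea; the per-actuator I/O rate here is purely illustrative (not from the original study), the point is only that leaving 20% of a 3380 empty raises the arm accesses available per allocated megabyte:

def arm_accesses_per_mb(actuator_io_per_sec, allocated_mb):
    # arm accesses/sec the actuator can deliver, spread over the data it holds
    return actuator_io_per_sec / allocated_mb

io_rate = 30.0                  # hypothetical sustainable I/Os per sec per actuator
full_3380 = 630.0               # MB per original 3380 actuator
print(arm_accesses_per_mb(io_rate, full_3380))          # fully loaded
print(arm_accesses_per_mb(io_rate, 0.8 * full_3380))    # loaded to only 80% of capacity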

Later there was a "fast" 3380J marketed ... that was a 3380K that only used 1/3rd the number of cylinders (same capacity as original 3380, but disk arm travel was only 1/3rd as far). More long winded in recent post "DASD, Channel and I/O long winded trivia"
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#105 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#107 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#108 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#109 DASD, Channel and I/O long winded trivia

DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk

some other specific recent posts mentioning 3380 disks & 3880 controller
https://www.garlic.com/~lynn/2023d.html#103 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#61 mainframe bus wars, How much space did the 68000 registers take up?
https://www.garlic.com/~lynn/2023d.html#19 IBM 3880 Disk Controller
https://www.garlic.com/~lynn/2023d.html#18 IBM 3880 Disk Controller
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#4 Some 3090 & channel related trivia:

--
virtualization experience starting Jan1968, online at home since Mar1970

3380 Capacity compared to 1TB micro-SD

From: Lynn Wheeler <lynn@garlic.com>
Subject: 3380 Capacity compared to 1TB micro-SD
Date: 01 Aug, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#111 3380 Capacity compared to 1TB micro-SD

I've periodically mentioned that in the late 80s, a senior disk engineer got a talk scheduled at the internal, annual, world-wide communication group conference, supposedly on 3174 performance ... but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing data fleeing mainframe datacenters to more distributed-computing friendly platforms, with a drop in disk sales. The disk division had come up with a number of solutions to address the problem ... however they were constantly vetoed by the communication group. The issue was the communication group was fiercely fighting off client/server and distributed computing and had a stranglehold on mainframe datacenters with their corporate responsibility for everything that crossed the datacenter walls. The communication group datacenter stranglehold wasn't just disks ... but the whole mainframe market, and a couple years later, IBM has one of the largest losses in the history of US companies and was being reorged into the 13 "baby blues" in preparation for breaking up the company.

In the period, an IBM disk division executive had a partial solution: investing in distributed computing startups that would use IBM disks, and he would periodically ask us to visit his investments and see if we could offer any assistance.

note ... 13 "baby blues" reorg reference gone 404
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
may also work
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

we had already left IBM, but we get a call from the bowels of Armonk (corp hdqtrs) asking if we could help with breakup of the company. Lots of business units were using supplier contracts in other units via MOUs. After the breakup, all of these contracts would be in different companies ... all of those MOUs would have to be cataloged and turned into their own contracts. Before we get started, a new CEO is brought in and reverses the breakup.

communication group posts, include datacenter stranglehold
https://www.garlic.com/~lynn/subnetwork.html#terminal
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

IBM DASD, CKD, FBA, multi-track search, etc posts
https://www.garlic.com/~lynn/submain.html#dasd
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

VM370

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: VM370
Date: 02 Aug, 2023
Blog: Facebook
One of my hobbies after graduating and joining IBM was enhanced production operating systems for internal datacenters. Then after the decision to add virtual memory to all 370s, there was a decision to do VM370 (in the morph of CP67->VM370, they simplified and/or dropped lots of stuff, including multiprocessor support). A decade+ ago, I was asked if I could track down the decision to make all 370s virtual memory. Basically, MVT storage management was so bad that regions had to be specified four times larger than used; as a result a typical 1mbyte 370/165 only ran four concurrent regions, insufficient to keep the system busy (and justified). Mapping MVT into a 16mbyte virtual memory (similar to running MVT in a CP67 16mbyte virtual machine) allowed the number of concurrently running regions to be increased by a factor of four with little or no paging. Old archived email with pieces of the email exchange:
https://www.garlic.com/~lynn/2011d.html#73

The Future System effort was started about the same time (completely different and going to completely replace 370; during FS, 370 efforts were being shutdown, lack of new 370 during the period is credited with giving clone 370 makers their market foothold). In 1974, I started on migrating from CP67->VM370 (I continued to work on 370 all during the FS period, even periodically ridiculing them when they came by to talk). Some FS detail
http://www.jfsowa.com/computer/memo125.htm

I had done automated benchmarking for tracking CP67 work ... so it was the 1st thing I migrated to VM370 ... however even modest load would consistently crash VM370 ... so the next thing was to migrate the CP67 kernel serialization mechanism (to eliminate all the crashes). By the start of 1975, I had my CSC/VM ready for internal distribution ... some old archived email (note the online, world-wide, sales&marketing support HONE system was one of my long time customers)
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

When FS finally implodes, there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 303x&3081 efforts in parallel. Also the head of (high-end 370) POK convinces corporate to kill the VM370 product, shutdown the development group and transfer all the people to POK for MVS/XA (otherwise, supposedly MVS/XA wouldn't be able to ship on time). Endicott eventually manages to save the VM370 product mission, but had to reconstitute a development group from scratch.

Also decision was made to start charging for kernel software. Note in the 23jun1969 "unbundling" announcement, it included charging for (application) software (but managed to make the case that kernel software was still free). Possibly with the rise of the 370 clone makers, the decision was changed ... starting with charging for all new kernel "add-ons" ... eventually transitioning to all kernel software was charged for by the early 80s. Then came the OCO-wars (object code only) in the 80s, where source code would no longer be distributed.

Some of my (internal) stuff was tapped as the guinea pig for charged-for kernel add-ons, and I spent some amount of time with lawyers and business people on charged-for kernel software policies. Supposedly it was just the "wheeler scheduler" (dynamic adaptive resource management) for VM370 Release 3, but that was only about 10%; I included a bunch of other stuff, including reorganization of the kernel structure for multiprocessor support ... but not the actual multiprocessor support. At the time, the policy was that kernel hardware support software would still be free *AND* free kernel software could have no requirement on charged-for software. When it was decided to ship multiprocessor support in VM370 Release 4, they faced a dilemma; it required the kernel reorganization that was in my priced "scheduler". The resolution was to move 90% of the code in the charged-for add-on scheduler into the "free" kernel (w/o changing the price charged).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
"wheeler scheduler", dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
benchmarking posts
https://www.garlic.com/~lynn/submain.html#benchmark
SMP, multiprocessor, and/or compare-and-swap posts
https://www.garlic.com/~lynn/subtopic.html#smp
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
23jun1969 unbundling announce posts
https://www.garlic.com/~lynn/submain.html#unbundle

--
virtualization experience starting Jan1968, online at home since Mar1970

DASD, Channel and I/O long winded trivia

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: DASD, Channel and I/O long winded trivia
Date: 03 Aug, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#105 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#107 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#108 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#109 DASD, Channel and I/O long winded trivia

I had also written a test for channel/controller speed. VM370 had a 3330/3350/2305 page format that had a "dummy short block" between 4k page blocks. The standard channel program had a seek followed by search record, read/write 4k ... possibly chained to search, read/write 4k (trying to maximize the number of 4k transfers in a single rotation). For records on the same cylinder, but a different track, in the same rotation, a track seek had to be added. The channel and/or controller time to process the embedded seek could exceed the rotation time for the dummy block ... causing an additional revolution.

The test would format a cylinder with the maximum possible dummy block size (between page records) and then start reducing it toward the minimum 50byte size ... checking to see if an additional rotation was required. The 158 (also the 303x channel director and 3081) had the slowest channel program processing. I also got several customers with non-IBM processors, channels, and disk controllers to run the tests ... so had combinations of IBM and non-IBM 370s with IBM and non-IBM disk controllers.
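
A minimal sketch of that measurement loop (not the actual VM370 test; the function names and the step size are my own):

def smallest_dummy_block(one_revolution, max_size, min_size=50, step=10):
    # one_revolution(dummy_size) -> True if the chained transfers (including the
    # embedded track seek) still complete without an extra rotation
    best = None
    size = max_size
    while size >= min_size:
        if not one_revolution(size):
            break               # extra revolution needed: channel/controller too slow
        best = size             # this channel/controller keeps up at this gap size
        size -= step
    return best                 # smallest dummy block handled without losing a rev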

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk

some past posts mentioning dummy block/record
https://www.garlic.com/~lynn/2023d.html#19 IBM 3880 Disk Controller
https://www.garlic.com/~lynn/2010e.html#36 What was old is new again (water chilled)
https://www.garlic.com/~lynn/2005s.html#22 MVCIN instruction

I've mentioned before that after the Future System imploded, there was a mad rush to get stuff back into the 370 product pipelines, kicking off the quick and dirty 303x & 3081 efforts in parallel. For the 303x channel director, they took the 158 engine with the integrated channel microcode (and w/o the 370 microcode). A 3031 was two 158 engines, one with just the 370 microcode and one with just the integrated channel microcode. A 3032 was a 168-3 reworked to use the channel director for external channels. A 3033 started out with 168-3 logic remapped to 20% faster chips.

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

ADVENTURE

From: Lynn Wheeler <lynn@garlic.com>
Subject: ADVENTURE
Date: 05 Aug, 2023
Blog: Facebook
After transferring to San Jose Research I got to wander around lots of IBM and customer locations in Silicon Valley. I would drop by TYMSHARE periodically and/or see them at the monthly BAYBUNCH meetings at SLAC
https://en.wikipedia.org/wiki/Tymshare
they then made their CMS-based online computer conferencing system available to the (mainframe user group) SHARE
https://en.wikipedia.org/wiki/SHARE_(computing)
for free in Aug1976 ... archives here
http://vm.marist.edu/~vmshare

I cut a deal with TYMSHARE to get a monthly tape dump of all VMSHARE files for putting up on the internal network and systems (biggest problem was IBM lawyers concerned internal employees would be contaminated with information about customers).

On one visit they demo'ed the ADVENTURE game. Somebody had found it on the Stanford SAIL PDP10 system and ported it to VM/CMS. I got a copy and made it available internally (would send source to anybody that could show they got all the points). Shortly, versions with more points appeared, along with a port to PLI.
https://en.wikipedia.org/wiki/Colossal_Cave_Adventure

trivia: they told a story about TYMSHARE executive finding out people were playing games on their systems ... and directed TYMSHARE was for business use and all games had to be removed. He quickly changed his mind when told that game playing had increased to 30% of TYMSHARE revenue.

Most IBM internal systems had "For Business Purposes Only" on the 3270 VM370 login screen; however, IBM San Jose Research had "For Management Approved Uses Only". It played a role when corporate audit said all games had to be removed and we refused.

posts mentioning TYMSHARE, vmshare, adventure
https://www.garlic.com/~lynn/2023c.html#14 Adventure
https://www.garlic.com/~lynn/2021b.html#84 1977: Zork
https://www.garlic.com/~lynn/2017h.html#11 The original Adventure / Adventureland game?
https://www.garlic.com/~lynn/2017f.html#67 Explore the groundbreaking Colossal Cave Adventure, 41 years on
https://www.garlic.com/~lynn/2017d.html#100 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2012d.html#38 Invention of Email
https://www.garlic.com/~lynn/2011b.html#31 Colossal Cave Adventure in PL/I
https://www.garlic.com/~lynn/2010d.html#84 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010d.html#57 Adventure - Or Colossal Cave Adventure

--
virtualization experience starting Jan1968, online at home since Mar1970

APL

From: Lynn Wheeler <lynn@garlic.com>
Subject: APL
Date: 05 Aug, 2023
Blog: Facebook
ref:
https://www.garlic.com/~lynn/2023d.html#110 APL

Somebody in Canada was doing a book about Edson and contacted me ... and I provided her with some pictures and stories (after Edson left IBM, I hired him for HSDT & HA/CMP)
https://en.wikipedia.org/wiki/Edson_Hendricks
https://www.amazon.com/Its-Cool-Be-Clever-Hendricks/dp/1897435630/

She mentioned that she had talked to you. I had also mentioned that I had done some analysis of the F35, which she wanted a copy of to send to somebody because Canada was considering it (turns out some high level politician, possibly the prime minister), but they've recently announced they might be getting the F35 anyway. I had been introduced to Boyd in the early 80s; he had helped Sprey with the design of the A10 ... and at the time Sprey was also being very critical of the F35. Some Boyd reference
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

Boyd posts and web URLs
https://www.garlic.com/~lynn/subboyd.html
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

some posts mentioning Boyd, Sprey, F35, & A10
https://www.garlic.com/~lynn/2022f.html#9 China VSLI Foundry
https://www.garlic.com/~lynn/2021i.html#88 IBM Downturn
https://www.garlic.com/~lynn/2016.html#57 Shout out to Grace Hopper (State of the Union)
https://www.garlic.com/~lynn/2015.html#10 NYT on Sony hacking

other posts with some F35 analysis
https://www.garlic.com/~lynn/2022h.html#81 Air Force unveils B-21 stealth plane. It's not a boondoggle, for a change
https://www.garlic.com/~lynn/2022e.html#101 The US's best stealth jets are pretty easy to spot on radar, but that doesn't make it any easier to stop them
https://www.garlic.com/~lynn/2019e.html#53 Stealthy no more? A German radar vendor says it tracked the F-35 jet in 2018 -- from a pony farm
https://www.garlic.com/~lynn/2019d.html#104 F-35
https://www.garlic.com/~lynn/2018f.html#83 Is LINUX the inheritor of the Earth?
https://www.garlic.com/~lynn/2018c.html#109 JSF/F-35
https://www.garlic.com/~lynn/2018c.html#108 F-35
https://www.garlic.com/~lynn/2018c.html#63 The F-35 has a basic flaw that means an F-22 hybrid could outclass it -- and that's a big problem
https://www.garlic.com/~lynn/2018c.html#60 11 crazy up-close photos of the F-22 Raptor stealth fighter jet soaring through the air
https://www.garlic.com/~lynn/2018c.html#19 How China's New Stealth Fighter Could Soon Surpass the US F-22 Raptor
https://www.garlic.com/~lynn/2018c.html#14 Air Force Risks Losing Third of F-35s If Upkeep Costs Aren't Cut
https://www.garlic.com/~lynn/2018b.html#86 Lawmakers to Military: Don't Buy Another 'Money Pit' Like F-35
https://www.garlic.com/~lynn/2017g.html#44 F-35
https://www.garlic.com/~lynn/2017c.html#15 China's claim it has 'quantum' radar may leave $17 billion F-35 naked
https://www.garlic.com/~lynn/2016h.html#93 F35 Program
https://www.garlic.com/~lynn/2016h.html#77 Test Pilot Admits the F-35 Can't Dogfight
https://www.garlic.com/~lynn/2016e.html#104 E.R. Burroughs
https://www.garlic.com/~lynn/2016e.html#61 5th generation stealth, thermal, radar signature
https://www.garlic.com/~lynn/2016b.html#96 Computers anyone?
https://www.garlic.com/~lynn/2016b.html#89 Computers anyone?
https://www.garlic.com/~lynn/2016b.html#55 How to Kill the F-35 Stealth Fighter; It all comes down to radar ... and a big enough missile
https://www.garlic.com/~lynn/2016b.html#20 DEC and The Americans
https://www.garlic.com/~lynn/2015f.html#46 No, the F-35 Can't Fight at Long Range, Either
https://www.garlic.com/~lynn/2015f.html#44 No, the F-35 Can't Fight at Long Range, Either
https://www.garlic.com/~lynn/2015c.html#14 With the U.S. F-35 Grounded, Putin's New Jet Beats Us Hands-Down
https://www.garlic.com/~lynn/2015b.html#59 A-10
https://www.garlic.com/~lynn/2014j.html#43 Let's Face It--It's the Cyber Era and We're Cyber Dumb
https://www.garlic.com/~lynn/2014j.html#41 50th/60th anniversary of SABRE--real-time airline reservations computer system
https://www.garlic.com/~lynn/2014j.html#40 China's Fifth-Generation Fighter Could Be A Game Changer In An Increasingly Tense East Asia
https://www.garlic.com/~lynn/2014i.html#102 A-10 Warthog No Longer Suitable for Middle East Combat, Air Force Leader Says
https://www.garlic.com/~lynn/2014h.html#49 How Comp-Sci went from passing fad to must have major
https://www.garlic.com/~lynn/2014h.html#36 The Designer Of The F-15 Explains Just How Stupid The F-35 Is
https://www.garlic.com/~lynn/2013o.html#40 ELP weighs in on the software issue:

--
virtualization experience starting Jan1968, online at home since Mar1970

DASD, Channel and I/O long winded trivia

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: DASD, Channel and I/O long winded trivia
Date: 06 Aug, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#105 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#107 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#108 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#109 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#114 DASD, Channel and I/O long winded trivia

Early/mid 80s, the majority of IBM revenue was from mainframe hardware. By the turn of the century, mainframe hardware was only a few percent of IBM revenue. Around 2012, the analysis was that mainframe hardware was only a couple percent of IBM revenue (and still dropping), but the mainframe group was 25% of IBM revenue (and 40% of profit) ... nearly all software and services.

... NOTE: no CKD disks have been manufactured for decades, all CKD is simulated on industry standard fixed-block disks. trivia ECKD CCWs started out for 3880 (disk controller) speed matching feature ("calypso") for attaching 3380 3mbyte/sec disks to 370 channels. From long ago and far away:

Date: 09/07/82 12:16:54
From: wheeler

STL cannot even handle what they currently have. Calypso (3880 speed matching buffer using "ECKD") is currently in the field and not working correctly. Several severity one situations. Engineers are on site at some locations trying to solve some hardware problems ... but they have commented that the software support for ECKD appears to be in even worse shape ... didn't even look like it had been tested.


... snip ... top of post, old email index

rest of the email:
https://www.garlic.com/~lynn/2007e.html#email820907b
and earlier email mentioning Calypso
https://www.garlic.com/~lynn/2015f.html#email820111

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

specific past posts mentioning calypso and/or mainframe hardware couple percent of IBM revenue but mainframe group 25% of revenue and 40% of profit (nearly all software and services)
https://www.garlic.com/~lynn/2023d.html#111 3380 Capacity compared to 1TB micro-SD
https://www.garlic.com/~lynn/2023d.html#105 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023c.html#103 IBM Term "DASD"
https://www.garlic.com/~lynn/2023c.html#5 IBM Downfall
https://www.garlic.com/~lynn/2022h.html#45 Christmas 1989
https://www.garlic.com/~lynn/2022f.html#68 Security Chips and Chip Fabs
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022f.html#12 What is IBM SNA?
https://www.garlic.com/~lynn/2022f.html#10 9 Mainframe Statistics That May Surprise You
https://www.garlic.com/~lynn/2022e.html#71 FedEx to Stop Using Mainframes, Close All Data Centers By 2024
https://www.garlic.com/~lynn/2022e.html#45 IBM Chairman John Opel
https://www.garlic.com/~lynn/2022e.html#35 IBM 37x5 Boxes
https://www.garlic.com/~lynn/2022d.html#70 IBM Z16 - The Mainframe Is Dead, Long Live The Mainframe
https://www.garlic.com/~lynn/2022d.html#5 Computer Server Market
https://www.garlic.com/~lynn/2022c.html#111 Financial longevity that redhat gives IBM
https://www.garlic.com/~lynn/2022c.html#98 IBM Systems Revenue Put Into a Historical Context
https://www.garlic.com/~lynn/2022b.html#63 Mainframes
https://www.garlic.com/~lynn/2022.html#54 Automated Benchmarking
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021g.html#18 IBM email migration disaster
https://www.garlic.com/~lynn/2021b.html#3 Will The Cloud Take Down The Mainframe?

--
virtualization experience starting Jan1968, online at home since Mar1970

Science Center, SCRIPT, GML, SGML, HTML, RSCS/VNET

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Science Center, SCRIPT, GML, SGML, HTML, RSCS/VNET
Date: 06 Aug, 2023
Blog: Facebook
Some of the MIT CTSS/7094 people went to the 5th flr to do MULTICS, others went to the IBM science center on the 4th flr and did virtual machines (initially cp40/cms on a 360/40 with hardware mods for virtual memory, morphing into CP67/CMS when the 360/67 standard with virtual memory became available, and later, after the decision to make all 370s with virtual memory, morphing into VM370/CMS), the internal network, and lots of online and performance work. CTSS RUNOFF was redone for CMS as SCRIPT, and after GML was invented at the science center in 1969, GML tag processing was added to SCRIPT (after a decade, GML morphs into the ISO standard SGML, and after another decade morphs into HTML at CERN).

trivia: the first webserver in the US was on the Stanford SLAC (CERN sister institution) VM370 system
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
SCRIPT, GML, SGML, HTML posts
https://www.garlic.com/~lynn/submain.html#sgml

A co-worker at the science center, Ed Hendricks, had done RSCS/VNET
https://en.wikipedia.org/wiki/Edson_Hendricks
https://www.amazon.com/Its-Cool-Be-Clever-Hendricks/dp/1897435630/

RSCS/VNET was used for most of the internal network and the corporate-sponsored BITNET. Ed had done clean interfaces for RSCS/VNET, so it was straightforward to write a driver emulating NJE. JES NJE had serious problems, starting with the original implementation using spare entries in the 255-entry HASP pseudo-device table (say 160-180 entries; the original JES NJE source still had the four letters "TUCC" in cols 68-71 from the customer that did the original implementation) at a time when the internal network had already passed 255 nodes (the internal network had more nodes than the arpanet/internet from just about the beginning until sometime mid/late 80s). JES NJE would also drop arriving traffic when either the origin or the destination node wasn't in its local table (sketched below). By the time the number of JES NJE supported nodes was increased to 999, the internal network had already passed 1000 nodes.
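
A toy illustration of that origin/destination check (a sketch only -- node names, table contents, and both forwarding functions are invented, not JES or RSCS code): with a table capped far below the size of the network, traffic involving any node that didn't fit simply disappeared, where a store-and-forward node would have passed it along.

# Illustrative sketch only -- not JES2/NJE source. Shows the effect of a
# fixed-size node table plus "discard if origin or destination is unknown".

NETWORK_NODES = [f"NODE{i:04d}" for i in range(1, 1001)]   # internal net past 1000 nodes
TABLE_LIMIT   = 255                                        # spare HASP pseudo-device entries
local_table   = set(NETWORK_NODES[:TABLE_LIMIT])           # whatever happened to fit

def jes_style_forward(origin, destination, payload):
    # behavior described above: silently drop traffic when either end
    # of the file isn't in the local definitions
    if origin not in local_table or destination not in local_table:
        return None                    # traffic vanishes
    return (destination, payload)

def store_and_forward(origin, destination, payload, next_hop="GATEWAY"):
    # what you actually want from an intermediate node: pass unknown
    # destinations toward a node that might know better
    if destination in local_table:
        return (destination, payload)
    return (next_hop, payload)

print(jes_style_forward("NODE0001", "NODE0900", "job output"))   # None -- dropped
print(store_and_forward("NODE0001", "NODE0900", "job output"))   # forwarded on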

The worst feature was that network fields were intermixed with job control fields, so minor release-to-release changes in the JES NJE format meant that traffic between nodes at different release levels could crash JES, bringing down MVS. As a result, JES NJE nodes were kept at the network boundary, behind an RSCS/VNET node running a special NJE emulation that understood the numerous NJE field formats and would rewrite each header to match the JES release level on the other side of a directly connected link (sketched below). In the 1st half of the 80s there was an infamous case of a JES release in San Jose crashing MVS systems in Hursley; Hursley MVS then blamed the Hursley RSCS/VNET support people because they hadn't updated the RSCS/VNET NJE emulation to account for the JES NJE format change in San Jose.
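
A rough sketch of what that boundary translation amounts to (field names and the release-to-layout mapping are invented for illustration; real NJE headers are more involved): the gateway re-serializes each header into whatever layout the directly attached JES release expects, so a format change on one side can't crash the MVS system on the other.

# Illustrative only -- invented field layouts, not real NJE record formats.
# The point: the boundary node re-builds headers to match the JES release on
# each directly connected link, instead of passing raw records straight through.

# hypothetical header layouts keyed by JES release level
LAYOUTS = {
    "rel1": ["origin", "destination", "jobname"],
    "rel2": ["origin", "destination", "jobclass", "jobname"],   # new field inserted
}

def parse_header(fields, release):
    return dict(zip(LAYOUTS[release], fields))

def build_header(record, release):
    # emit only the fields the target release understands, in its order,
    # defaulting anything it expects that the source didn't carry
    return [record.get(name, "") for name in LAYOUTS[release]]

def gateway(fields, from_release, to_release):
    """Boundary RSCS/VNET-style translation between two JES release levels."""
    return build_header(parse_header(fields, from_release), to_release)

# a "rel1" header arriving from San Jose, re-written for a "rel2" JES in Hursley
print(gateway(["SANJOSE", "HURSLEY", "PAYROLL1"], "rel1", "rel2"))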

Later, marketing withdrew the native RSCS/VNET drivers, shipping just the NJE emulation ... however, the internal network kept running the native RSCS/VNET drivers because they had much higher throughput than the NJE protocol (until required to switch to SNA).

internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
HASP, JES, NJE, NJI, etc posts
https://www.garlic.com/~lynn/submain.html#hasp

trivia: on 1jan1983, the arpanet converted from IMP/host protocol to the internetworking protocol (TCP/IP), at a time when there were approx. 100 IMP network nodes and 255 hosts ... while the corporate internal network was rapidly approaching 1000 nodes. Old archived post with a list of corporate locations that added one or more network nodes during 1983:
https://www.garlic.com/~lynn/2006k.html#8

One of the big differences in the further growth of the internal network versus the "Internet" was that the internal network required encrypted links (lots of gov. hassle, especially when links crossed national boundaries) and the communication group was (trying to) force all workstations and PCs into 3270 terminal emulation to the host .... while the Internet was mostly unencrypted links and got a big spurt when workstations and PCs got (especially LAN) TCP/IP support.

internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
communication group fiercely fighting to preserve terminal emulation paradigm posts
https://www.garlic.com/~lynn/subnetwork.html#terminal

--
virtualization experience starting Jan1968, online at home since Mar1970

Science Center, SCRIPT, GML, SGML, HTML, RSCS/VNET

From: Lynn Wheeler <lynn@garlic.com>
Subject: Science Center, SCRIPT, GML, SGML, HTML, RSCS/VNET
Date: 06 July, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#118 Science Center, SCRIPT, GML, SGML, HTML, RSCS/VNET

trivia2: the communication group was fiercely fighting off the release of mainframe TCP/IP support ... then possibly some customers got that decision reversed ... but the communication group then changed its position: since it had corporate responsibility for everything crossing the datacenter walls, the support had to be released through them. What shipped got 44kbytes/sec aggregate throughput while using nearly a whole 3090 processor (and the port to MVS, done by simulating the VM370 API, was even more compute intensive). I then did the changes for RFC1044 support and, in some tuning tests at Cray Research between a Cray and an IBM 4341, got sustained 4341 channel throughput using only a modest amount of the 4341 processor (something like a 500 times improvement in bytes moved per instruction executed).
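
Rough back-of-the-envelope arithmetic behind the "500 times" figure (the MIPS ratings, CPU fractions, and channel rate below are ballpark assumptions chosen for illustration, not measured numbers from those tests):

# Back-of-the-envelope only: MIPS figures, CPU fractions, and the channel
# rate are rough assumptions to illustrate "bytes moved per instruction".

def bytes_per_instruction(throughput_bytes_sec, mips, cpu_fraction):
    return throughput_bytes_sec / (mips * 1_000_000 * cpu_fraction)

# base product: ~44 kbytes/sec aggregate, consuming nearly a whole 3090 CPU
base = bytes_per_instruction(44_000, mips=15.0, cpu_fraction=1.0)

# RFC1044 path: sustained 4341 channel throughput (taken here as ~1 mbyte/sec)
# using only a modest slice of a roughly 1 MIPS-class 4341
rfc1044 = bytes_per_instruction(1_000_000, mips=1.2, cpu_fraction=0.5)

print(f"base    : {base:.4f} bytes/instruction")
print(f"RFC1044 : {rfc1044:.2f} bytes/instruction")
print(f"ratio   : ~{rfc1044 / base:.0f}x")   # lands in the same ballpark as ~500x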

Mainframe TCP/IP RFC1044 support posts
https://www.garlic.com/~lynn/subnetwork.html#1044

Then the communication group forced the conversion of the internal network to SNA/VTAM (instead of allowing conversion to TCP/IP like BITNET was doing).

internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

Later the communication group hired a silicon valley contractor to implement TCP/IP support directly in VTAM. The TCP/IP he initially demo'ed had significantly higher throughput than LU6.2. He was then told that *everybody knows* a *proper* TCP/IP implementation is much slower than LU6.2 and that they would only be paying for a *proper* implementation.

posts mentioning TCP/IP & LU6.2 in VTAM
https://www.garlic.com/~lynn/2023d.html#31 IBM 3278
https://www.garlic.com/~lynn/2023c.html#70 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#49 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#33 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023b.html#56 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2023.html#95 IBM San Jose
https://www.garlic.com/~lynn/2022h.html#115 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022h.html#94 IBM 360
https://www.garlic.com/~lynn/2022h.html#86 Mainframe TCP/IP
https://www.garlic.com/~lynn/2022h.html#71 The CHRISTMA EXEC network worm - 35 years and counting!
https://www.garlic.com/~lynn/2022g.html#47 Some BITNET (& other) History
https://www.garlic.com/~lynn/2022f.html#12 What is IBM SNA?
https://www.garlic.com/~lynn/2022b.html#80 Channel I/O
https://www.garlic.com/~lynn/2022.html#24 Departmental/distributed 4300s
https://www.garlic.com/~lynn/2021j.html#27 Programming Languages in IBM
https://www.garlic.com/~lynn/2021d.html#13 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#97 What's Fortran?!?!
https://www.garlic.com/~lynn/2021b.html#9 IBM Kneecapping products
https://www.garlic.com/~lynn/2019d.html#8 IBM TCP/IP
https://www.garlic.com/~lynn/2019c.html#35 Transition to cloud computing
https://www.garlic.com/~lynn/2017j.html#33 How DARPA, The Secretive Agency That Invented The Internet, Is Working To Reinvent It
https://www.garlic.com/~lynn/2017i.html#35 IBM Shareholders Need Employee Enthusiasm, Engagemant And Passions
https://www.garlic.com/~lynn/2017d.html#72 more IBM online systems
https://www.garlic.com/~lynn/2013i.html#62 Making mainframe technology hip again
https://www.garlic.com/~lynn/2012i.html#95 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2011n.html#63 ARPANET's coming out party: when the Internet first took center stage
https://www.garlic.com/~lynn/2010g.html#29 someone smarter than Dave Cutler

--
virtualization experience starting Jan1968, online at home since Mar1970

Science Center, SCRIPT, GML, SGML, HTML, RSCS/VNET

From: Lynn Wheeler <lynn@garlic.com>
Subject: Science Center, SCRIPT, GML, SGML, HTML, RSCS/VNET
Date: 06 July, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#118 Science Center, SCRIPT, GML, SGML, HTML, RSCS/VNET
https://www.garlic.com/~lynn/2023d.html#119 Science Center, SCRIPT, GML, SGML, HTML, RSCS/VNET

Another Hursley anecdote (also 1st part of the 80s): STL (Santa Teresa Lab, just a couple miles down the road from the San Jose plant site, name since changed to SVL) had a plan with Hursley to put in a (double-hop) 56kbit satellite link (to use each other's systems offshift). They hooked up the link with RSCS/VNET and everything worked fine. Then an (MVS bigot) executive insisted it be hooked up with JES/NJE, and it wouldn't work. They then switched it back to RSCS/VNET and it resumed working (data flowing fine in both directions). The executive then claimed that RSCS/VNET was too dumb to realize it wasn't working (discounting the fact that data was being sent and received just fine). It turns out the JES/NJE (SNA) link handshake protocol had a time-out that was shorter than the double-hop satellite round-trip time (up west coast, down east coast, up east coast, down to England ... and back).
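
The arithmetic isn't subtle: each geosynchronous hop adds roughly a quarter second each way, so a double-hop round trip is on the order of a second. A quick sketch, using nominal orbit geometry (the handshake timeout value below is a placeholder assumption, not a documented JES/NJE parameter):

# Nominal geosynchronous-orbit numbers; the handshake timeout is a placeholder
# assumption, not a documented JES/NJE value.

C_KM_S   = 299_792      # speed of light, km/sec
SLANT_KM = 40_000       # rough ground-station-to-geosync-satellite slant range

one_way_per_hop    = 2 * SLANT_KM / C_KM_S    # ground -> satellite -> ground
double_hop_one_way = 2 * one_way_per_hop      # up west coast, down east coast,
                                              # up east coast, down to England
round_trip = 2 * double_hop_one_way           # ... and all the way back

print(f"double-hop round trip ~ {round_trip:.2f} s")   # about a second

HANDSHAKE_TIMEOUT = 0.5   # hypothetical value, just for the comparison
print("handshake times out:", round_trip > HANDSHAKE_TIMEOUT)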

VTAM/SNA had a separate problem with its window pacing protocol: the round-trip latency of even a single-hop satellite link was longer than the time it took to transmit the full "window" on a 56kbit link, so the sender exhausted the window and stopped transmitting while it waited for the returning pacing response ("ACK"). It turns out that even a short-haul terrestrial T1 (1.5mbit) round trip was longer than the time it took to transmit the full "window" (contributing to the communication group never supporting links faster than 56kbit).
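
The pacing problem reduces to simple arithmetic: the sender can have at most one window in flight per round trip, so the link sits idle whenever the window drains faster than the pacing response can come back. In the sketch below the window size (a pacing count of 7 with 256-byte RUs) and the RTT figures are illustrative assumptions, not a particular VTAM configuration:

# Illustrative only: the window size is an assumed small SNA pacing window and
# the RTTs are ballpark figures, not measurements of a specific configuration.

WINDOW_BYTES = 7 * 256        # e.g. pacing count of 7, 256-byte RUs (assumption)

def stalls(link_bps, rtt_s, window_bytes=WINDOW_BYTES):
    """Does the sender drain the whole window before the pacing response returns?"""
    transmit_time = window_bytes * 8 / link_bps
    ceiling = window_bytes / rtt_s            # best case: one window per round trip
    return transmit_time < rtt_s, transmit_time, ceiling

for name, bps, rtt in [("56kbit, single-hop satellite", 56_000, 0.55),
                       ("T1, short-haul terrestrial",   1_544_000, 0.015)]:
    stalled, tx, ceiling = stalls(bps, rtt)
    print(f"{name}: window transmit {tx*1000:.1f} ms, RTT {rtt*1000:.0f} ms, "
          f"stalls={stalled}, throughput ceiling ~{ceiling/1000:.1f} kbytes/sec")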

At the time I had a satellite T1 link between the Los Gatos lab (on the west coast) and Clementi's E&S lab in IBM Kingston (on the east coast) that was capable of sustained full-T1 throughput.

In the late 80s, the communication group eventually came out with the 3737, supporting a single short-haul terrestrial T1 link ... by spoofing the host VTAMs. It had a boatload of memory and M68K processors simulating a local VTAM CTCA to the host; it would immediately return an "ACK" and then transmit the actual data in the background (the host VTAM believing it had already been received), as a countermeasure to SNA/VTAM's anemic window pacing implementation.
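
A toy stop-and-wait comparison of what the local ACK buys (same illustrative window and assumed delays as the sketch above; nothing here is taken from the 3737 itself): once the "ACK" comes from the box next door instead of from across the link, the per-window wait collapses and throughput is bounded by the wire again.

# Toy comparison only -- assumed window size and ACK delays, not 3737 measurements.

T1_BPS = 1_544_000
WINDOW = 7 * 256              # same illustrative pacing window as above

def effective_rate(link_bps, ack_delay_s, window_bytes=WINDOW):
    """Stop-and-wait sender: each window costs transmit time plus ACK wait."""
    per_window = window_bytes * 8 / link_bps + ack_delay_s
    return window_bytes / per_window          # bytes/sec

remote_ack = effective_rate(T1_BPS, 0.015)    # pacing response from the far VTAM
local_ack  = effective_rate(T1_BPS, 0.0005)   # 3737-style: box next door answers at
                                              # once, forwards from its own buffer later

print(f"end-to-end pacing : ~{remote_ack/1000:6.1f} kbytes/sec")
print(f"local-ACK spoofing: ~{local_ack/1000:6.1f} kbytes/sec  (near the T1 limit)")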

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
HASP, ASP, JES2, JES3, NJE posts
https://www.garlic.com/~lynn/submain.html#hasp

posts mentioning STL/Hursley double-hop satellite link
https://www.garlic.com/~lynn/2023b.html#57 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2023.html#103 IBM ROLM
https://www.garlic.com/~lynn/2023.html#95 IBM San Jose
https://www.garlic.com/~lynn/2022f.html#38 What's something from the early days of the Internet which younger generations may not know about?
https://www.garlic.com/~lynn/2006s.html#17 bandwidth of a swallow (was: Real core)
https://www.garlic.com/~lynn/2002q.html#35 HASP:

posts mentioning 3737
https://www.garlic.com/~lynn/2023d.html#31 IBM 3278
https://www.garlic.com/~lynn/2023c.html#57 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023b.html#77 IBM HSDT Technology
https://www.garlic.com/~lynn/2023b.html#53 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2023.html#103 IBM ROLM
https://www.garlic.com/~lynn/2023.html#95 IBM San Jose
https://www.garlic.com/~lynn/2022e.html#33 IBM 37x5 Boxes
https://www.garlic.com/~lynn/2022c.html#80 Peer-Coupled Shared Data
https://www.garlic.com/~lynn/2021j.html#31 IBM Downturn
https://www.garlic.com/~lynn/2021j.html#16 IBM SNA ARB
https://www.garlic.com/~lynn/2021h.html#49 Dynamic Adaptive Resource Management
https://www.garlic.com/~lynn/2021d.html#14 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#97 What's Fortran?!?!
https://www.garlic.com/~lynn/2021c.html#83 IBM SNA/VTAM (& HSDT)
https://www.garlic.com/~lynn/2019d.html#117 IBM HONE
https://www.garlic.com/~lynn/2019c.html#35 Transition to cloud computing
https://www.garlic.com/~lynn/2018f.html#109 IBM Token-Ring
https://www.garlic.com/~lynn/2017i.html#35 IBM Shareholders Need Employee Enthusiasm, Engagemant And Passions
https://www.garlic.com/~lynn/2017g.html#35 Eliminating the systems programmer was Re: IBM cuts contractor billing by 15 percent (our else)
https://www.garlic.com/~lynn/2017.html#57 TV Show "Hill Street Blues"
https://www.garlic.com/~lynn/2016b.html#82 Qbasic - lies about Medicare
https://www.garlic.com/~lynn/2015g.html#42 20 Things Incoming College Freshmen Will Never Understand
https://www.garlic.com/~lynn/2015e.html#2 Western Union envisioned internet functionality

--
virtualization experience starting Jan1968, online at home since Mar1970

Science Center, SCRIPT, GML, SGML, HTML, RSCS/VNET

From: Lynn Wheeler <lynn@garlic.com>
Subject: Science Center, SCRIPT, GML, SGML, HTML, RSCS/VNET
Date: 07 July, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023d.html#118 Science Center, SCRIPT, GML, SGML, HTML, RSCS/VNET
https://www.garlic.com/~lynn/2023d.html#119 Science Center, SCRIPT, GML, SGML, HTML, RSCS/VNET
https://www.garlic.com/~lynn/2023d.html#120 Science Center, SCRIPT, GML, SGML, HTML, RSCS/VNET

GML History
https://web.archive.org/web/20230703135757/http://www.sgmlsource.com/history/sgmlhist.htm

... note "GML" was chosen because "G", "M", "L" were first letters of inventors last name.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
gml/sgml posts
https://www.garlic.com/~lynn/submain.html#sgml

trivia: I had got a 360/67 "blue card" ... from "M"

360/67 blue card

--
virtualization experience starting Jan1968, online at home since Mar1970


