List of Archived Posts

2024 Newsgroup Postings (06/12 - 07/29)

time-sharing history, Privilege Levels Below User
time-sharing history, Privilege Levels Below User
time-sharing history, Privilege Levels Below User
Disconnect Between Coursework And Real-World Computers
Disconnect Between Coursework And Real-World Computers
Disconnect Between Coursework And Real-World Computers
Disconnect Between Coursework And Real-World Computers
TCP/IP Protocol
TCP/IP Protocol
Benchmarking and Testing
Benchmarking and Testing
ATM, Mainframes, Tandem
ADA, FAA ATC, FSD
MVS/ISPF Editor
801/RISC
Mid-Range Market
REXX and DUMPRX
Private Equity Becomes Roach Motel as Public Pension Funds and Other Investors Borrow As Funds Remain Tied Up
Mid-Range Market
IBM Internal Network
NAS Hitachi 370 Clones
IBM CSC and MIT MULTICS
Early Computer Use
Obscure Systems in my Past
ARM is sort of channeling the IBM 360
IBM 23June1969 Unbundling Announcement
IBM 23June1969 Unbundling Announcement
STL Channel Extender
Do We Need Language to Think?
Future System and S/38
Future System and S/38
Future System and S/38
ancient OS history, ARM is sort of channeling the IBM 360
IBM 23June1969 Unbundling Announcement
ancient OS history, ARM is sort of channeling the IBM 360
Null terminated strings, Disconnect Between Coursework And Real-World Computers
This New Internet Thing, Chapter 8
Chat Rooms and Social Media
GISH GALLOP
ancient OS history, ARM is sort of channeling the IBM 360
ancient OS history, ARM is sort of channeling the IBM 360
ancient OS history, ARM is sort of channeling the IBM 360
GISH GALLOP
Chat Rooms and Social Media
Chat Rooms and Social Media
Economic Mess
GISH GALLOP
E-commerce
Architectural implications of locate mode I/O
REXX and DUMPRX
Architectural implications of locate mode I/O
Email Archive
Cray
16June1911, IBM Incorporation Day
Architectural implications of locate mode I/O
Architectural implications of locate mode I/O
Free and Open Source Software-and Other Market Failures
Seymour Cray and the Dawn of Supercomputing
Architectural implications of locate mode I/O and channels
Too-Big-To-Fail Money Laundering
16June1911, IBM Incorporation Day
Architectural implications of locate mode I/O
360/65, 360/67, 360/75 750ns memory
360/65, 360/67, 360/75 750ns memory
360/65, 360/67, 360/75 750ns memory
360/65, 360/67, 360/75 750ns memory
360/65, 360/67, 360/75 750ns memory
A Timeline of Mainframe Innovation
ARPANET & IBM Internal Network
ARPANET & IBM Internal Network
ARPANET & IBM Internal Network
ARPANET & IBM Internal Network
IBM "Winchester" Disk
GOSIP
Some Email History
Joe Biden Kicked Off the Encryption Wars
Some work before IBM
Other Silicon Valley
Other Silicon Valley
Other Silicon Valley
IBM ATM At San Jose Plant Site
APL and REXX Programming Languages
APL and REXX Programming Languages
Continuations
ATT/SUN and Open System Foundation
ATT/SUN and Open System Foundation
ATT/SUN and Open System Foundation
Benchmarking and Testing
Computer Virtual Memory
John Boyd and IBM Wild Ducks
Computer Virtual Memory
Computer Virtual Memory
Computer Virtual Memory
Why Bush Invaded Iraq
Mainframe Integrity
Mainframe Integrity
Mainframe Integrity
Mainframe Integrity
Why Bush Invaded Iraq
Interdata Clone IBM Telecommunication Controller
Chipsandcheese article on the CDC6600
Chipsandcheese article on the CDC6600
Chipsandcheese article on the CDC6600
IBM 360/40, 360/50, 360/65, 360/67, 360/75
What happens to old MAC assignments?
Biggest Computer Goof Ever
Private Equity
Biggest Computer Goof Ever
Time to retire the phrase 'Military Industrial Complex'
Time to retire the phrase 'Military Industrial Complex'
IBM 3705 & 3725
GNOME bans Manjaro Core Team Member for uttering "Lunduke"
43 years ago, Microsoft bought 86-DOS and started its journey to dominate the PC market
... some 3090 and a little 3081

time-sharing history, Privilege Levels Below User

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: time-sharing history, Privilege Levels Below User
Newsgroups: comp.arch
Date: Wed, 12 Jun 2024 11:54:55 -1000
John Levine <johnl@taugh.com> writes:
In any event, I'd find the second article I linked to, the VM history written by IBMers who were there, more credible than some random third party magazine. CMS really was written at the same time as CP, and they always intended them to work together as a time-sharing system.

Some of the MIT CTSS/7094 people went to the 5th flr to do Multics; others went to the science center on the 4th flr to do virtual machines, internal network, invent GML in 1969, other interactive applications.

cambridge science center wanted a 360/50 to add virtual memory to ... but all the spare 360/50s were going to the FAA ATC project ... and they had to settle for a 360/40. (virtual machine) CP/40 (running on the bare hardware using hardware virtual memory mods) was developed in parallel with CMS (running on the bare 360/40). When CP/40 virtual machine support was operational, they could then run CMS in CP/40 virtual machines.

Melinda history
https://www.leeandmelindavarian.com/Melinda#VMHist
and CP/40
https://www.leeandmelindavarian.com/Melinda/JimMarch/CP40_The_Origin_of_VM370.pdf
my OCR from Comeau's original paper
https://www.garlic.com/~lynn/cp40seas1982.txt

CP/40 morphs into CP/67 when 360/67 standard with virtual memory becomes available. I was responsible for OS/360 running on the 360/67 (as a 360/65; the univ shut down the datacenter on weekends and I had it dedicated for 48hrs straight). CSC came out Jan1968 to install CP/67 (3rd install after CSC itself and MIT Lincoln Labs) ... and I mostly played with it during my weekend dedicated time. The first couple months were spent rewriting pathlengths for running OS/360 in a virtual machine. Benchmark was an OS/360 jobstream that ran 322secs on the real machine. It started out at 858secs in a virtual machine (CP67 CPU 534secs) ... after a few months I got CP67 CPU down to 113secs. I then rewrite the time-sharing system scheduling and dispatching, page I/O and page replacement, I/O arm scheduling, etc.
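
As a rough reading of those numbers (only the 322/858/534/113-second figures come from the benchmark above; the rest is simple arithmetic, shown here as a small Python sketch):

# figures quoted in the text
bare   = 322    # OS/360 jobstream on the real machine, secs
virt   = 858    # same jobstream, first run under CP/67, secs
cp_cpu = 534    # CP/67 CPU time within that first run, secs
tuned  = 113    # CP/67 CPU time after the pathlength rewrites, secs

print(virt - bare)              # 536 -- virtual-machine overhead, essentially all CP/67 CPU
print(round(tuned / cp_cpu, 2)) # 0.21 -- rewrites left about a fifth of the original CP overhead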

I've joked that the original CP/67 scheduling delivered to the univ (and that I completely replaced) ... looked a lot like the Unix scheduling I first saw 15yrs later. Also, the 1st install at the univ (jan1968) had CP67 source in OS/360 datasets ... it wasn't until a few months later that they moved the source to CMS files. After I graduated and joined the science center, one of my hobbies was enhanced production operating systems for internal datacenters.

CP-67
https://en.wikipedia.org/wiki/CP-67
CP/CMS
https://en.wikipedia.org/wiki/CP/CMS
History of CP/CMS
https://en.wikipedia.org/wiki/History_of_CP/CMS
Cambridge Scientific Center
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center

when it was decided to add virtual memory to all 370s, it was also decided to rewrite CP67 for VM370, simplifying and/or dropping lots of features (also renaming Cambridge Monitor System to Conversational Monitor System and crippling its ability to run on real machine).

1974, I start migrating lots of original CP67 stuff (including lots that I had done as undergraduate) to VM370 Release2 base for an enhanced internal CSC/VM (including for world-wide online sales&marketing support HONE systems). Then in 1975 I upgrade to VM370 Release3 base and add the CP67 multiprocessor support (one of the things dropped in CP67->VM370) ... originally for US consolidated HONE complex so they could add 2nd processor to each of their systems (all the US HONE systems had been consolidated in Palo Alto, trivia: when FACEBOOK 1st moved into silicon valley, it was into new bldg built next door to the former US consolidated HONE datacenter).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
SMP, tightly-coupled, multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
HONE & APL posts
https://www.garlic.com/~lynn/subtopic.html#hone

--
virtualization experience starting Jan1968, online at home since Mar1970

time-sharing history, Privilege Levels Below User

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: time-sharing history, Privilege Levels Below User
Newsgroups: comp.arch
Date: Wed, 12 Jun 2024 15:19:14 -1000
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
I recall CMS was single-user to start with, and the point of running it under "CP" aka "VM" was to offer a multi-user service. Did CMS ever become multi-user in its own right?

re:
https://www.garlic.com/~lynn/2024d.html#0 time-sharing history, Privilege Levels Below User

over years relying more & more on CP kernel services, no multi-user ... but did get multitasking
https://www.ibm.com/docs/en/zvm/7.3?topic=cms-application-multitasking
https://www.ibm.com/docs/en/zvm/7.3?topic=programming-zvm-cms-application-multitasking
https://www.vm.ibm.com/pubs/redbooks/sg245164.pdf

The original CMS that could run on real hardware supported SIO and channel programs for file i/o ... a CP "diagnose" function for CMS file i/o was added to CP/67 that ran purely synchronously (didn't return to CMS until the file I/O was completed) ... in the transition to VM370, CMS went purely with the CP "diagnose" (and the SIO capability was eliminated).

When I joined the science center and also saw the virtual memory file support in MULTICS ... I figured I could do one for CMS ... that scaled up faster than the normal file I/O operation ... and I claimed I learned what not to do for a page-mapped filesystem from TSS/360 (TSS/360 pretty much just memory-mapped the filesystem and then faulted in pages ... while I did a combination of memory mapping and pre-fetching, read-ahead and write-behind support).
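
A minimal sketch of the difference described above (hypothetical Python, nothing to do with the actual CMS or TSS/360 code): pure demand paging waits on one page fault at a time, while combining the mapping with explicit read-ahead brings in a group of pages per I/O.

import mmap

PAGE = 4096

def read_mapped(path, readahead_pages=8):
    # map the file, then touch it in read-ahead sized chunks rather than
    # one demand-faulted page at a time (the TSS/360-style behavior)
    with open(path, "rb") as f:
        mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        try:
            step = PAGE * readahead_pages
            chunks = [mm[off:off + step] for off in range(0, len(mm), step)]
        finally:
            mm.close()
    return b"".join(chunks)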

Some of the IBM Future System issues were specifying a TSS/360-like filesystem ... one of the last nails in the FS coffin was a study showing that if 370/195 applications were ported to an FS machine made out of the fastest available hardware, it would have the throughput of a 370/145 (about a 30 times slowdown ... part of it was serialization of file i/o).

Some existing FS descriptions talk about how FS lived on with S/38 ... for entry-level business operation ... there was sufficient hardware performance to provide the necessary throughput for the S/38 market.

In any case, the FS implosion contributed to memory-mapped filesystem implementations acquiring a very bad reputation inside IBM. In the 1980s, I could show that heavily loaded, high-end systems with 3380 (3mbyte/sec disks) running my page-mapped CMS filesystem had at least three times the sustained throughput of the standard CMS filesystem.

some FS
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

trivia: my brother was regional Apple rep (largest physical area CONUS) and when he came into town, I could be invited to business dinners and argue MAC design (even before MAC announced). He also figured out how to remotely dial into the S/38 that ran Apple to monitor manufacturing and delivery schedules.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
page-mapped filesystem
https://www.garlic.com/~lynn/submain.html#mmap
CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm

posts mention my brother regional apple rep
https://www.garlic.com/~lynn/2024c.html#36 Old adage "Nobody ever got fired for buying IBM"
https://www.garlic.com/~lynn/2023f.html#99 Vintage S/38
https://www.garlic.com/~lynn/2023c.html#22 IBM Downfall
https://www.garlic.com/~lynn/2022c.html#29 Unix work-alike
https://www.garlic.com/~lynn/2021k.html#43 Transaction Memory
https://www.garlic.com/~lynn/2021h.html#48 Dynamic Adaptive Resource Management
https://www.garlic.com/~lynn/2021c.html#89 Silicon Valley
https://www.garlic.com/~lynn/2021b.html#68 IBM S/38
https://www.garlic.com/~lynn/2021b.html#7 IBM & Apple
https://www.garlic.com/~lynn/2019c.html#2 S/38, AS/400
https://www.garlic.com/~lynn/2018f.html#118 The Post-IBM World
https://www.garlic.com/~lynn/2018f.html#49 PC Personal Computing Market
https://www.garlic.com/~lynn/2018b.html#12 Soon, the Only Alternatives to Windows Server will be open-source
https://www.garlic.com/~lynn/2017g.html#66 Is AMD Dooomed? A Silly Suggestion!
https://www.garlic.com/~lynn/2017g.html#29 Eliminating the systems programmer was Re: IBM cuts contractor bil ling by 15 percent (our else)
https://www.garlic.com/~lynn/2015h.html#66 IMPI (System/38 / AS/400 historical)
https://www.garlic.com/~lynn/2014g.html#97 IBM architecture, was Fifty Years of nitpicking definitions, was BASIC,theProgrammingLanguageT
https://www.garlic.com/~lynn/2014e.html#48 Before the Internet: The golden age of online service
https://www.garlic.com/~lynn/2013d.html#55 Arthur C. Clarke Predicts the Internet, 1974
https://www.garlic.com/~lynn/2013b.html#3 New HD
https://www.garlic.com/~lynn/2012d.html#23 IBM cuts more than 1,000 U.S. Workers
https://www.garlic.com/~lynn/2011c.html#14 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2010q.html#32 IBM Future System
https://www.garlic.com/~lynn/2010p.html#12 Rare Apple I computer sells for $216,000 in London
https://www.garlic.com/~lynn/2010j.html#80 Idiotic programming style edicts
https://www.garlic.com/~lynn/2007u.html#3 folklore indeed
https://www.garlic.com/~lynn/2007m.html#63 The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM
https://www.garlic.com/~lynn/2003d.html#66 unix

--
virtualization experience starting Jan1968, online at home since Mar1970

time-sharing history, Privilege Levels Below User

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: time-sharing history, Privilege Levels Below User
Newsgroups: comp.arch
Date: Wed, 12 Jun 2024 16:13:03 -1000
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
So what did you think of it? The original hardware architecture was heavily centred around the 60.15Hz video refresh. Each refresh interval, 21888 bytes were read out of the video buffer (for the 512×342 display), and 740 bytes were read out of the sound buffer to go to the speaker.

re:
https://www.garlic.com/~lynn/2024d.html#0 time-sharing history, Privilege Levels Below User
https://www.garlic.com/~lynn/2024d.html#1 time-sharing history, Privilege Levels Below User

biggest issue was what I characterized as kitchen table "only" with no business uses ... desktop publishing was somewhat in between (visicalc supposedly wasn't part of it) ... at a time when large corporations were ordering tens of thousands of IBM/PCs with 3270 terminal emulation ... a single desktop footprint doing both mainframe terminal and an increasing amount of local processing.

later IBM co-worker left and did some work for Apple using Cray with 100mbyte/sec high-end graphics ... could be used to simulate various processor and graphic performance ... part of the joke that Cray used Apple to design Cray machines and Apple used Cray machine to design Apple machines.

some history
https://arstechnica.com/features/2005/12/total-share/
https://arstechnica.com/features/2005/12/total-share.ars/2
https://arstechnica.com/features/2005/12/total-share.ars/3
https://arstechnica.com/features/2005/12/total-share.ars/4
https://arstechnica.com/features/2005/12/total-share.ars/5
https://arstechnica.com/features/2005/12/total-share.ars/6
https://arstechnica.com/features/2005/12/total-share.ars/7
https://arstechnica.com/features/2005/12/total-share.ars/8
https://arstechnica.com/features/2005/12/total-share.ars/9
https://arstechnica.com/features/2005/12/total-share.ars/10

--
virtualization experience starting Jan1968, online at home since Mar1970

Disconnect Between Coursework And Real-World Computers

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Disconnect Between Coursework And Real-World Computers
Newsgroups: alt.folklore.computers
Date: Thu, 13 Jun 2024 13:15:21 -1000
Charlie Gibbs <cgibbs@kltpzyxm.invalid> writes:
The big boss at my first POE was reluctant to switch from cards to disks because "how can you be sure the data is really there if you can't see the holes?"

May2008 at Berkeley (a year after he disappears on sailing trip), there was a gathering to celebrate Jim Gray. Part of that Gray celebration involved acknowledging Jim Gray as father of (modern) financial dataprocessing (including enabling electronic payment transactions). Jim's formalizing of transaction semantics provided the basis that was crucial in allowing financial auditors to move from requiring paper ledgers to trusting computer operations (gone 404, but lives on at wayback machine)
https://web.archive.org/web/20080616153833/http://www.eecs.berkeley.edu/IPRO/JimGrayTribute/pressrelease.html
also transaction benchmarking
http://www.tpc.org/information/who/gray5.asp

after transferring to SJR on the west coast in the 70s, I worked with Jim Gray and Vera Watson on the original SQL/relational RDBMS (System/R). Jim leaves IBM for Tandem fall of 1980 ... recent AFC posts
https://www.garlic.com/~lynn/2024c.html#110 Anyone here (on news.eternal-september.org)?
https://www.garlic.com/~lynn/2024c.html#111 Anyone here (on news.eternal-september.org)?

System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
"Tandem Memos" and online computer computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

posts mentioning Jim's celebration
https://www.garlic.com/~lynn/2024.html#79 Benchmarks
https://www.garlic.com/~lynn/2023c.html#37 Global & Local Page Replacement
https://www.garlic.com/~lynn/2022g.html#27 Why Things Fail
https://www.garlic.com/~lynn/2022d.html#37 IBM Business Conduct Guidelines
https://www.garlic.com/~lynn/2022d.html#25 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022c.html#13 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2021c.html#39 WA State frets about Boeing brain drain, but it's already happening
https://www.garlic.com/~lynn/2019b.html#13 Tandem Memo
https://www.garlic.com/~lynn/2018d.html#70 tablets and desktops was Has Microsoft
https://www.garlic.com/~lynn/2017f.html#37 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/2017d.html#39 The Pentagon still uses computer software from 1958 to manage its contracts
https://www.garlic.com/~lynn/2016g.html#91 IBM Jargon and General Computing Dictionary Tenth Edition
https://www.garlic.com/~lynn/2016g.html#23 How to Fix IBM
https://www.garlic.com/~lynn/2016b.html#48 Windows 10 forceful update?
https://www.garlic.com/~lynn/2014k.html#2 Flat (VSAM or other) files still in use?
https://www.garlic.com/~lynn/2014f.html#69 Is end of mainframe near ?
https://www.garlic.com/~lynn/2014e.html#24 Tandem Memos
https://www.garlic.com/~lynn/2013m.html#45 Why is the mainframe so expensive?
https://www.garlic.com/~lynn/2013g.html#24 Old data storage or data base
https://www.garlic.com/~lynn/2012p.html#64 IBM Is Changing The Terms Of Its Retirement Plan, Which Is Frustrating Some Employees
https://www.garlic.com/~lynn/2012p.html#28 Some interesting post about the importance of Security and what it means for the Mainframe
https://www.garlic.com/~lynn/2011l.html#32 Selectric Typewriter--50th Anniversary
https://www.garlic.com/~lynn/2011e.html#80 Which building at Berkeley?
https://www.garlic.com/~lynn/2010n.html#85 Hashing for DISTINCT or GROUP BY in SQL
https://www.garlic.com/~lynn/2010m.html#21 Mainframe Hall of Fame (MHOF)
https://www.garlic.com/~lynn/2010m.html#13 Is the ATM still the banking industry's single greatest innovation?
https://www.garlic.com/~lynn/2009r.html#4 70 Years of ATM Innovation
https://www.garlic.com/~lynn/2009o.html#51 8 ways the American information worker remains a Luddite
https://www.garlic.com/~lynn/2009m.html#78 ATMs by the Numbers
https://www.garlic.com/~lynn/2008p.html#27 Father Of Financial Dataprocessing
https://www.garlic.com/~lynn/2008i.html#36 A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
https://www.garlic.com/~lynn/2008i.html#32 A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First

--
virtualization experience starting Jan1968, online at home since Mar1970

Disconnect Between Coursework And Real-World Computers

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Disconnect Between Coursework And Real-World Computers
Newsgroups: alt.folklore.computers
Date: Thu, 13 Jun 2024 16:05:47 -1000
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
Was that the name of the research project? Then when it was commercialized, the product became known as "SQL/DS"?

Later succeeded by IBM's all-singing, all-dancing DBMS product, DB2.

I had to do a DB2 setup a few months ago. As with any IBM product it seems, you soon find yourself asking "why did they make things so complicated?" ...


re:
https://www.garlic.com/~lynn/2024d.html#3 Disconnect Between Coursework And Real-World Computers

This System/R article mentions the first customer was Pratt & Whitney
https://en.wikipedia.org/wiki/IBM_System_R

but there was also Bank of America that was getting 60 VM/4341 systems for distributed operation (one of the things Jim palmed off on me when he left for Tandem was supporting BofA).

System/R ... we were able to do the tech transfer to Endicott ("under the radar" while the company was preoccupied with the next great new DBMS "EAGLE") for SQL/DS (although Endicott cut it back a little; there were some enhancements to VM370, and Endicott wanted to be able to ship SQL/DS w/o needing any VM370 changes).

Then when "EAGLE" implodes, there was a request for how fast System/R could be ported to MVS ... which is eventually released as DB2 ... originally for decision/support *only*.

Last product at IBM was HA/6000 starting in late 80s, originally for NYTimes to migrate their newspaper system (ATEX) off VAXCluster to RS/6000. I rename it HA/CMP when I start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres) that had VAXCluster support in the same source base with UNIX.

IBM Toronto had just gotten the ("Shelby") project to implement an OS2 RDBMS in C ... while it would be portable, it was far from an industrial strength DBMS ... it took some years before it had the availability and scale-up features as well as (mainframe) DB2 compatibility and was (also) (re)branded DB2.

Jim had done a study at Tandem showing that commodity hardware had gotten a lot more reliable and major service outages were increasingly due to other factors like environmental (hurricanes, earthquakes, floods, etc).
https://www.garlic.com/~lynn/grayft84.pdf

So one of the factors worked on was geographically distributed operation; out marketing, I coined the terms disaster survivability and geographic survivability ... and the IBM S/88 Product Administrator started taking me around to their customers as well as got me to write a section for the IBM Corporate Continuous Availability Strategy document ... however it got pulled when both Rochester (AS/400) and POK (mainframe) complained (that they couldn't meet the requirements).

Early Jan1992, there was a meeting with the Oracle CEO on cluster scale-up where AWD/Hester says that we will have 16 processor clusters by mid-92 and 128 processor clusters by ye-92. In that meeting were a few local IBMers and a number of other Oracle people ... one was the Oracle SVP who liked to mention that when he was at IBM STL, he was the person that handled most of the tech transfer to STL for MVS DB2.

However, by end of January, cluster scale-up had been transferred for announce as IBM supercomputer (for technical/scientific *ONLY*) and we were told that we couldn't work on anything with more than four processors; we leave IBM a few months later. Contributing was mainframe DB2 complaining that if we were allowed to continue, it would be years ahead of them.

A couple years later, I was brought in as a consultant to a small client/server startup; two former Oracle people (that had been in the HA/CMP Oracle CEO scale-up meeting) were there responsible for something called "commerce server" and wanted to do financial transactions on the server; the startup had also invented this technology they called "SSL" they wanted to use for transactions; it is sometimes now called "electronic commerce". I had responsibility for everything between webservers and the financial industry payment networks.

system/r posts
https://www.garlic.com/~lynn/submain.html#systemr
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
Payment gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway

posts mentioning Toronto "Shelby" RDBMS for OS2
https://www.garlic.com/~lynn/2023e.html#86 Relational RDBMS
https://www.garlic.com/~lynn/2023e.html#42 Systems Network Architecture
https://www.garlic.com/~lynn/2022b.html#62 IBM DB2
https://www.garlic.com/~lynn/2021b.html#38 HA/CMP Marketing
https://www.garlic.com/~lynn/2010n.html#82 Hashing for DISTINCT or GROUP BY in SQL
https://www.garlic.com/~lynn/2009f.html#58 Opinion: The top 10 operating system stinkers
https://www.garlic.com/~lynn/2008l.html#57 No offense to any one but is DB2/6000 an old technology. Does anybody still use it, if so what type of industries??
https://www.garlic.com/~lynn/2007j.html#12 Newbie question on table design
https://www.garlic.com/~lynn/2006w.html#13 IBM sues maker of Intel-based Mainframe clones
https://www.garlic.com/~lynn/2005u.html#41 Mainframe Applications and Records Keeping?

--
virtualization experience starting Jan1968, online at home since Mar1970

Disconnect Between Coursework And Real-World Computers

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Disconnect Between Coursework And Real-World Computers
Newsgroups: alt.folklore.computers
Date: Fri, 14 Jun 2024 07:41:23 -1000
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
I remember around the 1980s or so, IBM did try to sell some models of System/390 (was it called that then?) as "supercomputers". I'm not sure if they were ever competitive. IBM was still seen as a marketing behemoth then.

However, their POWER architecture has been much more successful in this regard, and there are a few POWER-based machines in respectable places in the Top500 list.


I had done some work with the 3033 processor engineers; once 3033 was out the door, the 3033 engineers started on 3090. Marketing wanted vector added to 3090 to sell into the numerical intensive market. The processor engineers were a little upset because they had optimized floating point so it ran as fast as memory (claiming that in the past floating point was so much slower than memory that memory could keep multiple floating point units constantly fed, which is what made vector worthwhile).

I had HSDT project that had T1 and faster computer links (both terrestrial and satellite) and one of the first was T1 satellite link between Los Gatos lab on the west coast and clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in IBM Kingston (on the east coast), which had several Floating Point Systems boxes with 40mbyte/sec disk arrays to keep up with numerical intensive processing (IBM still had 3mbyte/sec channels and no disk arrays)
https://en.wikipedia.org/wiki/Floating_Point_Systems
https://en.wikipedia.org/wiki/Floating_Point_Systems#History
Cornell University, led by physicist Kenneth G. Wilson, made a supercomputer proposal to NSF with IBM to produce a processor array of FPS boxes attached to an IBM mainframe with the name lCAP.

Was also working with the NSF Director and was supposed to get $20M to interconnect the NSF Supercomputer centers ... then congress cuts the budget, some other things happen and eventually an RFP is released (in part based on what we already had running), from 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid, awarded 24Nov87), as regional networks connect in, it becomes the NSFNET backbone, precursor to modern internet.

previous post (in this thread) mentions last product we did at IBM was HA/CMP and cluster scale-up (large numbers of POWER RS/6000)
https://www.garlic.com/~lynn/2024d.html#4

end of Jan1992, cluster scale-up was transferred for announce as IBM Supercomputer for technical/scientific *ONLY* and we were told we couldn't work on anything with more than four processors (we leave IBM a few months later).

Computerworld news 17feb1992 (from wayback machine) ... IBM establishes laboratory to develop parallel systems (pg8)
https://archive.org/details/sim_computerworld_1992-02-17_26_7
cluster supercomputer for technical/scientific *ONLY*
https://www.garlic.com/~lynn/2001n.html#6000clusters1
more news 11may1992, IBM "caught" by surprise
https://www.garlic.com/~lynn/2001n.html#6000clusters2
and 15jun1992, Foray into Mainstream for Parallel Computing
https://www.garlic.com/~lynn/2001n.html#6000clusters3

as to any "surprise" ... more than decade previous (Jan1979), I was con'ed into doing a benchmark on IBM 4341 for national lab that was looking at getting 70 4341s for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami).

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

posts mentioning jan1979 4341 benchmark for national lab
https://www.garlic.com/~lynn/2023d.html#84 The Control Data 6600
https://www.garlic.com/~lynn/2022h.html#108 IBM 360
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022f.html#91 CDC6600, Cray, Thornton
https://www.garlic.com/~lynn/2022f.html#89 CDC6600, Cray, Thornton
https://www.garlic.com/~lynn/2021j.html#94 IBM 3278
https://www.garlic.com/~lynn/2021j.html#52 ESnet
https://www.garlic.com/~lynn/2019c.html#49 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2018d.html#42 Mainframes and Supercomputers, From the Beginning Till Today
https://www.garlic.com/~lynn/2018b.html#49 Think you know web browsers? Take this quiz and prove it
https://www.garlic.com/~lynn/2018.html#46 VSE timeline [was: RE: VSAM usage for ancient disk models]
https://www.garlic.com/~lynn/2017i.html#62 64 bit addressing into the future
https://www.garlic.com/~lynn/2016h.html#51 Resurrected! Paul Allen's tech team brings 50-year -old supercomputer back from the dead
https://www.garlic.com/~lynn/2016h.html#44 Resurrected! Paul Allen's tech team brings 50-year-old supercomputer back from the dead
https://www.garlic.com/~lynn/2016e.html#116 How the internet was invented
https://www.garlic.com/~lynn/2015h.html#106 DOS descendant still lives was Re: slight reprieve on the z
https://www.garlic.com/~lynn/2015h.html#71 Miniskirts and mainframes
https://www.garlic.com/~lynn/2014j.html#37 History--computer performance comparison chart
https://www.garlic.com/~lynn/2014c.html#61 I Must Have Been Dreaming (36-bit word needed for ballistics?)
https://www.garlic.com/~lynn/2013c.html#53 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013.html#38 DEC/PDP minicomputers for business in 1968?
https://www.garlic.com/~lynn/2012n.html#45 Under what circumstances would it be a mistake to migrate applications/workload off the mainframe?
https://www.garlic.com/~lynn/2011d.html#40 IBM Watson's Ancestors: A Look at Supercomputers of the Past
https://www.garlic.com/~lynn/2011c.html#65 Comparing YOUR Computer with Supercomputers of the Past
https://www.garlic.com/~lynn/2009r.html#37 While watching Biography about Bill Gates on CNBC last Night
https://www.garlic.com/~lynn/2009l.html#67 ACP, One of the Oldest Open Source Apps
https://www.garlic.com/~lynn/2007d.html#62 Cycles per ASM instruction
https://www.garlic.com/~lynn/2006y.html#21 moving on
https://www.garlic.com/~lynn/2006x.html#31 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2002i.html#22 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#19 CDC6600 - just how powerful a machine was it?

--
virtualization experience starting Jan1968, online at home since Mar1970

Disconnect Between Coursework And Real-World Computers

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Disconnect Between Coursework And Real-World Computers
Newsgroups: alt.folklore.computers
Date: Fri, 14 Jun 2024 08:14:08 -1000
re:
https://www.garlic.com/~lynn/2024d.html#3 Disconnect Between Coursework And Real-World Computers
https://www.garlic.com/~lynn/2024d.html#4 Disconnect Between Coursework And Real-World Computers
https://www.garlic.com/~lynn/2024d.html#5 Disconnect Between Coursework And Real-World Computers

slightly related, 1988 the IBM branch office asks if I could help LLNL (national lab) get some fiber stuff they were playing with standardized ... which quickly becomes the fibre-channel standard (FCS, initially 1gbit/sec, full-duplex, 200mbyte/sec aggregate). IBM then finally gets some of their stuff shipped with ES/9000 as ESCON (when it is already obsolete), 17mbytes/sec.

Later some IBM POK engineers become involved with FCS and define a heavy weight protocol that significantly cuts the native throughput, eventually released as FICON.

Latest public benchmark I can find is z196 "Peak I/O" that got 2M IOPS with 104 FICON. About the same time a FCS was announced for E5-2600 blades (commonly used in cloud megadatacenters) claiming over a million IOPS (two such FCS have higher throughput than 104 FICON running over 104 FCS). As an aside, IBM recommended that SAPs (system assist processors that do actual I/O) CPU should be limited to 70% ... which would be 1.5M IOPS. Also, no CKD DASD have been made for decades, all being simulated on industry standard fixed-block disks.
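
Putting the quoted figures side by side (straight division on the numbers above, nothing more):

z196_peak_iops = 2_000_000   # published z196 "Peak I/O" figure
ficon_channels = 104
fcs_iops       = 1_000_000   # "over a million IOPS" claimed for a single FCS

per_ficon = z196_peak_iops / ficon_channels
print(round(per_ficon))             # ~19,231 IOPS per FICON channel
print(round(fcs_iops / per_ficon))  # one FCS does the work of ~52 FICON channels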

We also worked with LLNL to get their Cray Unicos LINCS filesystem ported to HA/CMP as well as working with NCAR to get their supercomputer filesystem (Mesa Archival) ported to HA/CMP.

FICON & FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
cloud megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

some old posts mentioning LLNL LINCS and NCAR Mesa Archival
https://www.garlic.com/~lynn/2023g.html#25 Vintage Cray
https://www.garlic.com/~lynn/2021c.html#63 Distributed Computing
https://www.garlic.com/~lynn/2011n.html#34 Last Word on Dennis Ritchie
https://www.garlic.com/~lynn/2006u.html#27 Why so little parallelism?

--
virtualization experience starting Jan1968, online at home since Mar1970

TCP/IP Protocol

From: Lynn Wheeler <lynn@garlic.com>
Subject: TCP/IP Protocol
Date: 14 Jun, 2024
Blog: Facebook
My wife was co-author of AWP39, peer-to-peer networking architecture ... in the period when SNA 1st appeared ... they had to qualify with "peer-to-peer" because SNA wasn't a System, wasn't a Network and wasn't an Architecture ... and had "co-opted" the term "network".

In the 80s, the communication group was starting to fiercely fight off client/server and distributed computing trying to preserve their dumb terminal paradigm. Early 80s, I got HSDT project, T1 and faster computer links (both terrestrial and satellite) and was periodically in battles with the communication group because they were cap'ed at 56kbits/sec. Mid-80s, communication group prepared analysis for the corporate executive committee explaining why customers wouldn't be interested in T1 until well into the 90s. They showed how "fat-pipes" (parallel 56kbit links treated as single logical link) dropped to zero by seven links (aka 392kbits). What they didn't know (or didn't want to tell the executive committee) was that typical T1 telco tariff in the period was about the same as six 56kbit links. We did a trivial customer survey and found 200 IBM mainframe customers with T1 links ... they just moved to non-IBM hardware and non-IBM software.
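
The arithmetic behind that (only the link speeds and the six-links-per-T1 tariff observation come from the text):

t1_bps   = 1_544_000   # US T1
link_bps = 56_000      # single leased 56kbit link
print(round(t1_bps / link_bps, 1))  # ~27.6 -- one T1 carries roughly 27 times a single 56kbit link
print(6 * link_bps)                 # 336,000 bps -- what the same money bought as a six-link "fat pipe"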

HSDT had some custom hardware being built on the other side of the pacific. The friday before leaving for a visit, got email from Raleigh announcing a new online forum about high-speed links with the following definition:
low-speed: 9.6kbits/sec
medium-speed: 19.2kbits/sec
high-speed: 56kbits/sec
very high-speed: 1.5mbits/sec

monday morning on wall of conference room on the other side of the pacific:
low-speed: <20mbits/sec
medium-speed: 100mbits/sec
high-speed: 200mbits-300mbits/sec
very high-speed: >600mbits/sec

Later I was on Chesson's XTP technical advisory board (which the communication group tried to block, and lost) where I wrote "rate-based" pacing into the specification ... which we had done when 1st starting HSDT. Since there were some military agencies involved, took XTP as HSP (High-Speed Protocol) to the ISO-chartered US ANSI X3S3.3 (standards group for network and transport layer 3&4 protocols). It was eventually rejected; we were told that ISO required standards work to conform to the OSI model, and XTP failed because it 1) supported internetworking protocol (which sits between levels 3&4), 2) bypassed the layer 3/4 interface and 3) went directly to the LAN MAC interface (which doesn't exist in the OSI model, sitting somewhere in the middle of layer 3).
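
A minimal sketch of what "rate-based" pacing means, as contrasted with window pacing (hypothetical Python, not the XTP specification text): the sender spaces packets by an interval derived from a target rate, instead of stopping whenever a fixed count of unACKed packets is outstanding.

import time

def paced_send(packets, rate_bytes_per_sec, send):
    # send each packet, then wait long enough that the average rate stays
    # at (or below) the configured target -- no ACK-counting window involved
    for pkt in packets:
        send(pkt)
        time.sleep(len(pkt) / rate_bytes_per_sec)

# e.g. paced_send([b"x" * 1024] * 10, 192_000, my_transmit_function)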

The communication group was also fighting off release of mainframe TCP/IP support. When they lost, they changed tactics and said that since they had corporate responsibility for everything that crossed the datacenter walls, it had to be released through them; what shipped got 44kbytes/sec aggregate throughput using nearly a whole 3090 processor. I then did changes for RFC1044 support and in some tuning tests at Cray Research between a Cray and an IBM 4341, got sustained 4341 channel throughput using only a modest amount of the 4341 processor (something like a 500 times increase in bytes moved per instruction executed).

Communication group sort of capitulates and comes out with 3737 for T1 links, it had boat loads of memory and Motorola 68K processors simulating local CTCA attached mainframe VTAM, immediately ACKing received packets ... and then using real protocol for T1 link to remote 3737. Problem was even on short haul terrestrial T1, (mainframe) VTAM window pacing had exhausted maximum unACKed transmitted packet limit, before ACKs started arriving (it required faking ACKs to try and keep packets flowing). However, the box's peak best was still about 2mbit/sec (US T1 full-duplex 3mbits, EU T1 full-duplex 4mbits).
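
The window-pacing problem above is a bandwidth-delay-product effect; a small illustrative calculation (the T1 speed is from the text, but the round-trip time, packet size and window size below are purely hypothetical, not documented VTAM values):

t1_bps   = 1_544_000
rtt_sec  = 0.03     # assume ~30ms short-haul terrestrial round trip
pkt_size = 256      # assume small packets, bytes
window   = 7        # assume a small fixed unACKed-packet limit

in_flight = t1_bps / 8 * rtt_sec       # ~5,790 bytes must be "on the wire" to keep the link busy
print(round(in_flight / pkt_size))     # ~23 packets needed in flight
print(round(window * pkt_size / in_flight, 2))  # ~0.31 -- the window alone fills under a third of the pipe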

Note, also fighting off client/server and distributed computing, the communication group severely kneecapped the PS2 microchannel cards. AWD (workstation) had done its own 4mbit token-ring card for the PC/RT (16bit PC/AT bus), but for the microchannel RS/6000, corporate told AWD they couldn't do their own cards and had to use the PS2 cards. Turns out the PS2 microchannel 16mbit token-ring cards had lower card throughput than the PC/RT 4mbit token-ring cards (the other microchannel cards had also been severely kneecapped). The new Almaden bldg had been heavily wired assuming 16mbit T/R. However they found 10mbit ethernet (over the same wiring) had higher aggregate throughput and lower latency ... besides $69 10mbit ethernet cards having much higher card throughput than $800 16mbit T/R cards. For the difference in the cost of ENET and T/R cards, they could put in several high-speed TCP/IP routers that each had IBM channel interfaces and 16 10mbit ENET interfaces (operating at 8.5mbit sustained, each router capable of 400mbit/sec), and also had T1 and T3 options.

Late 80s, a senior disk engineer got a talk scheduled at the communication group's world-wide, internal, annual conference, supposedly on 3174 performance ... but opens the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing a drop in disk sales with data fleeing the mainframe to more distributed-computing-friendly platforms. The disk division had come up with a number of solutions but they were constantly vetoed by the communication group (with their corporate strategic ownership of everything that crossed datacenter walls). One of the disk division partial work-arounds was investing in distributed computing startups that would use IBM disks ... and the disk executive would periodically ask us to drop by his investments to see if we could help.

However, communication group datacenter stranglehold wasn't just disks and a couple years later, IBM has one of the largest losses in the history of US companies and was being reorganized into the 13 "baby blues" in preparation for breaking up the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

we had already left the company, but get call from the bowels of (corp hdqtrs) Armonk asking us to help with the corporate breakup. Before we get started, the board brings in the former AMEX president as CEO who (somewhat) reverses the breakup (but it wasn't long before the disk division was "divested").

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
posts mentioning communication group stranglehold on mainframe datacenters
https://www.garlic.com/~lynn/subnetwork.html#cmc
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
AMEX, Private Equity, IBM related Gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner

some posts mentioning AWP39
https://www.garlic.com/~lynn/2024b.html#101 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#56 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#30 ACP/TPF
https://www.garlic.com/~lynn/2024.html#84 SNA/VTAM
https://www.garlic.com/~lynn/2023f.html#40 Rise and Fall of IBM
https://www.garlic.com/~lynn/2023e.html#41 Systems Network Architecture
https://www.garlic.com/~lynn/2023c.html#49 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2022f.html#50 z/VM 50th - part 3
https://www.garlic.com/~lynn/2022f.html#43 What's something from the early days of the Internet which younger generations may not know about?
https://www.garlic.com/~lynn/2022f.html#4 What is IBM SNA?
https://www.garlic.com/~lynn/2022e.html#25 IBM "nine-net"
https://www.garlic.com/~lynn/2021h.html#90 IBM Internal network
https://www.garlic.com/~lynn/2019d.html#119 IBM Acronyms
https://www.garlic.com/~lynn/2018e.html#2 Frank Heart Dies at 89
https://www.garlic.com/~lynn/2018e.html#1 Service Bureau Corporation
https://www.garlic.com/~lynn/2018b.html#13 Important US technology companies sold to foreigners
https://www.garlic.com/~lynn/2017e.html#62 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017d.html#29 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2017c.html#55 The ICL 2900
https://www.garlic.com/~lynn/2016e.html#124 Early Networking
https://www.garlic.com/~lynn/2016d.html#48 PL/I advertising
https://www.garlic.com/~lynn/2015h.html#99 Systems thinking--still in short supply
https://www.garlic.com/~lynn/2015g.html#96 TCP joke
https://www.garlic.com/~lynn/2014m.html#25 Microsoft Open Sources .NET, Saying It Will Run on Linux and Mac
https://www.garlic.com/~lynn/2014e.html#15 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2014.html#99 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013n.html#26 SNA vs TCP/IP
https://www.garlic.com/~lynn/2013n.html#19 z/OS is antique WAS: Aging Sysprogs = Aging Farmers
https://www.garlic.com/~lynn/2013g.html#44 What Makes code storage management so cool?
https://www.garlic.com/~lynn/2012o.html#52 PC/mainframe browser(s) was Re: 360/20, was 1132 printer history
https://www.garlic.com/~lynn/2012m.html#24 Does the IBM System z Mainframe rely on Security by Obscurity or is it Secure by Design
https://www.garlic.com/~lynn/2012k.html#41 Cloud Computing
https://www.garlic.com/~lynn/2012k.html#23 How to Stuff a Wild Duck
https://www.garlic.com/~lynn/2012i.html#25 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2012h.html#17 Hierarchy
https://www.garlic.com/~lynn/2012c.html#41 Where are all the old tech workers?
https://www.garlic.com/~lynn/2011n.html#2 Soups
https://www.garlic.com/~lynn/2011m.html#6 What is IBM culture?
https://www.garlic.com/~lynn/2011l.html#26 computer bootlaces
https://www.garlic.com/~lynn/2010q.html#73 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2010g.html#29 someone smarter than Dave Cutler
https://www.garlic.com/~lynn/2010e.html#5 What is a Server?
https://www.garlic.com/~lynn/2010d.html#62 LPARs: More or Less?

--
virtualization experience starting Jan1968, online at home since Mar1970

TCP/IP Protocol

From: Lynn Wheeler <lynn@garlic.com>
Subject: TCP/IP Protocol
Date: 14 Jun, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024d.html#7 TCP/IP Protocol

For HSDT, was also working with the NSF director and was supposed to get $20M to interconnect the NSF Supercomputer centers ... then congress cuts the budget, some other things happen and finally an RFP is released (in part based on what we already had running). From 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
https://www.garlic.com/~lynn/2018d.html#33

The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid), as regional networks connect in, it becomes the NSFNET backbone, precursor to modern internet.

from one of the science center people that invented GML in 1969 (after a decade it morphs into the ISO standard SGML, and after another decade it morphs into HTML at CERN):
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

trivia: the first webserver in the US was on Stanford SLAC VM370 system
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994

A co-worker at the science center was responsible for the CP67 wide-area network (which morphs into the corporate network, larger than arpanet/internet from just about the beginning until sometime mid/late 80s). The technology was also used for the corporate-sponsored univ. BITNET (& EARN)
https://en.wikipedia.org/wiki/BITNET
https://en.wikipedia.org/wiki/European_Academic_and_Research_Network
https://earn-history.net/technology/the-network/

Edson
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to DARPA, where Hendricks described his innovations to the principal scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75, Cerf and Hendricks were the only two delegates from the United States, to attend a workshop on Data Communications at the International Institute for Applied Systems Analysis, 2361 Laxenburg Austria where again, Hendricks spoke publicly about his innovative design which paved the way to the Internet as we know it today.

... snip ...

SJMerc article about Edson (he passed aug2020) and "IBM'S MISSED OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
gml, sgml, html posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet (& earn) posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

Benchmarking and Testing

From: Lynn Wheeler <lynn@garlic.com>
Subject: Benchmarking and Testing
Date: 14 Jun, 2024
Blog: Facebook
When I transfer out to SJR in the 70s, I get to wander around datacenters in silicon valley, including bldg14&15 across the street (disk engineering and product test). They were running prescheduled, around the clock, stand-alone testing. They mentioned that they had recently tried MVS, but it had a 15min mean-time-between-failures in that environment. I offer to rewrite the I/O supervisor to make it bullet proof and never fail so they can do any amount of on-demand, concurrent testing, greatly improving productivity. I do an internal-only research report and happen to mention the MVS 15min MTBF, bringing down the wrath of the MVS organization on my head. A few years later, when 3380/3880 are about to ship, FE has 57 simulated errors that they expect to see. MVS is still failing for all 57 (requiring manual re-ipl) and in 2/3rds of the cases, no indication of what caused the failures (I wasn't sorry).

Earlier, after joining IBM, one of my hobbies was enhanced production operating systems for internal datacenters ... including a bunch of stuff I had done as undergraduate in the 60s: page replacement algorithms, dynamic adaptive scheduling and resource management, and other stuff (the world-wide, online sales&marketing support HONE systems were long time customers). As part of that I had done automated benchmarking with synthetic workloads, being able to specify configurations and the kinds and amounts of workloads (also involved automated rebooting/re-ipl between each benchmark). After the decision to add virtual memory to all 370s, the decision was also made to rewrite CP67 for VM370 (which involved simplifying or dropping lots of features). In 1974, I start migrating lots of missing features from CP67 to a VM370 Release2-based system ... starting with the automated benchmarking support. However, initially VM370 was unable to complete a set of benchmarks w/o crashing ... so the next step was migrating a lot of CP67 integrity features in order to complete a set of benchmarks ... before starting on further enhancements and performance work for my internal CSC/VM.

The 23June1969 unbundling announcement included starting to charge for SE services, maintenance, and (application) software (but managed to make the case that kernel software was still free). In the early 70s, there was the Future System effort that was completely different from 370 and was going to completely replace 370s (internal politics was killing off 370 efforts, and the lack of new 370s during the period is credited with giving the 370 clone makers their market foothold).
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

When FS implodes, there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 in parallel. Possibly because of the rise of the 370 clone makers, there was also a decision to start charging for kernel software ... and a bunch of my internal stuff was chosen as the guinea pig (and I had to spend time with business planners and lawyers on kernel charging policies). At the science center, there was also an APL-based analytical system model done (made available on HONE systems as the Performance Predictor, where branch people could enter customer configuration and workloads and ask "what-if" questions about workload/configuration changes) ... which was integrated with the automated benchmarking. Before initial release, 1000 benchmarks were specified that had a uniform distribution of configurations and workloads, including extreme stress-testing benchmarks. The modified "system model" would predict the result of a benchmark and then compare the prediction with the benchmark (validating the model and my dynamic adaptive support). A modified APL model then selected configurations and workloads for another 1000 benchmarks (2000 total that took three months elapsed time to run), searching for anomalous combinations that my dynamic adaptive scheduling and resource management might have problems with.
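
A minimal sketch of the benchmark-selection idea (hypothetical Python, not the APL model; the parameter names are made up for illustration): draw configuration/workload combinations roughly uniformly across the space, run each one, and compare the measured result against the model's prediction.

import random

def pick_benchmarks(n, memory_sizes, user_range, io_mixes):
    # uniform-ish coverage of the configuration/workload space,
    # including the extremes used for stress testing
    return [{
        "memory_mb": random.choice(memory_sizes),
        "users": random.randint(*user_range),
        "io_mix": random.choice(io_mixes),
    } for _ in range(n)]

specs = pick_benchmarks(1000, [512, 1024, 4096], (1, 300), ["cpu-bound", "paging-heavy", "io-bound"])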

tank trivia: I had been introduced to John Boyd in the early 80s and would sponsor his briefings at IBM. Boyd is credited with the Desert Storm left hook, but there are all sorts of excuses why the Abrams weren't in place to catch the retreating Republican Guards. Add to the excuses: possibly Boyd just used the rated Abrams speed but didn't realize how tightly tied the Abrams were to supply and maintenance.

Boyd posts and URL references
https://www.garlic.com/~lynn/subboyd.html
playing disk engineer in bldgs14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
my internal CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE & APL posts
https://www.garlic.com/~lynn/subtopic.html#hone
23jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
benchmarking posts
https://www.garlic.com/~lynn/submain.html#benchmark

--
virtualization experience starting Jan1968, online at home since Mar1970

Benchmarking and Testing

From: Lynn Wheeler <lynn@garlic.com>
Subject: Benchmarking and Testing
Date: 14 Jun, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024d.html#9 Benchmarking and Testing

other trivia: After the initial set of CP67->VM370 changes for CSC/VM, I start migrating SMP, tightly-coupled support to a VM370 Release3-based CSC/VM system, initially for the consolidated US HONE datacenter to add a 2nd processor to each of their systems (for 16 processors total). I then get talked into helping with a 16-processor tightly-coupled SMP and we con the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168-3 logic to 20% faster chips). Everybody thought it was great until somebody tells the head of POK that it could be decades before POK's favorite son operating system (MVS) had effective 16-processor support (at the time, MVS documentation had 2-processor throughput at only 1.2-1.5 times that of a single processor). He then invites some of us to never visit POK again and directs the 3033 processor engineers heads down on 3033 with no distractions (POK doesn't ship a 16-processor SMP until after the turn of the century). At HONE, I was getting twice the throughput with the combination of highly efficient SMP as well as a kind of cache affinity (improving the cache hit ratio).
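
A minimal sketch of the cache-affinity idea (hypothetical Python, not the VM370/CSC-VM dispatcher): prefer to redispatch a virtual machine on the processor it last ran on, so it finds more of its working set still in that processor's cache.

def pick_cpu(task, idle_cpus):
    # task: dict carrying the CPU it last ran on; idle_cpus: set of CPU ids
    if task["last_cpu"] in idle_cpus:
        return task["last_cpu"]   # affinity hit: warmer cache, better hit ratio
    return min(idle_cpus)         # otherwise take any idle processor

print(pick_cpu({"last_cpu": 1}, {0, 1}))   # -> 1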

Head of POK also convinces corporate to kill the vm370 product, shut down the development group and transfer all the people to POK for MVS/XA (Endicott eventually manages to save the VM370 product mission but has to recreate a development group from scratch). I also transfer out to the west coast.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
SMP, tightly-coupled multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

--
virtualization experience starting Jan1968, online at home since Mar1970

ATM, Mainframes, Tandem

From: Lynn Wheeler <lynn@garlic.com>
Subject: ATM, Mainframes, Tandem.
Date: 16 Jun, 2024
Blog: Facebook
Late 70s, an IBM SE in LA on a large savings bank account re-implemented ATM support under VM/370 running on a 370/158 that outperformed TPF running on a 370/168. He had implemented more sophisticated transaction and disk arm scheduling algorithms that took into account time-of-day and past activity patterns to better schedule transactions and improve the number of transactions handled per disk arm sweep.
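
(An illustrative sketch of the arm-sweep batching idea, not the SE's actual implementation; order_for_sweep and the (txn_id, cylinder) tuple shape are assumed names.)

def order_for_sweep(pending, current_cyl):
    # pending: list of (txn_id, cylinder) requests queued for one disk.
    # Service requests at or beyond the current arm position in ascending
    # cylinder order, then the rest on the return sweep ... so one arm pass
    # handles as many queued transactions as possible.
    outward = sorted((r for r in pending if r[1] >= current_cyl), key=lambda r: r[1])
    inward = sorted((r for r in pending if r[1] < current_cyl), key=lambda r: r[1], reverse=True)
    return outward + inward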

past refs:
https://www.garlic.com/~lynn/2016.html#65 Lineage of TPF
https://www.garlic.com/~lynn/2014i.html#53 transactions, was There Is Still Hope
https://www.garlic.com/~lynn/2014i.html#13 IBM & Boyd

--
virtualization experience starting Jan1968, online at home since Mar1970

ADA, FAA ATC, FSD

From: Lynn Wheeler <lynn@garlic.com>
Subject: ADA, FAA ATC, FSD
Date: 16 Jun, 2024
Blog: Facebook
last product we did at IBM was HA/6000, started out for NYTimes to move their newspaper system (ATEX) off vaxcluster to rs/6000. I rename it HA/CMP when I start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres that had vaxcluster support in same source base with unix ... it would be years before IBM had anything with the necessary features). We did reviews of lots of failures and availability, were brought in to review the FAA future system, and got to be good friends with the TA to the FSD president (who was spending 2nd shift programming Ada for the project). Then the S/88 Product Administrator started taking us around to their customers and got me to write a section for the corporate continuous availability strategy document (it got pulled when both Rochester/AS400 and POK/mainframe complained they couldn't meet the requirements).

Early Jan1992 had meeting with Oracle CEO where AWD/Hester tells them that we would have 16-system clusters by mid-92 and 128-system clusters by ye-92 ... but by the end of Jan, cluster scale-up was transferred for announce as IBM Supercomputer (for technical/scientific *ONLY*) and we were told we couldn't work on anything with more than four processors (we leave IBM a few months later). Contributing was mainframe DB2 complaining that if we were allowed to proceed, it would be years ahead of them.

Computerworld news 17feb1992 (from wayback machine) ... IBM establishes laboratory to develop parallel systems (pg8)
https://archive.org/details/sim_computerworld_1992-02-17_26_7
cluster supercomputer for technical/scientific *ONLY*
https://www.garlic.com/~lynn/2001n.html#6000clusters1
more news 11may1992, IBM "caught" by surprise
https://www.garlic.com/~lynn/2001n.html#6000clusters2
and 15jun1992, Foray into Mainstream for Parallel Computing
https://www.garlic.com/~lynn/2001n.html#6000clusters3

part of the FAA ATC review was the position that IBM triple-redundant hardware made software fault analysis unnecessary ... the review found some business process fault scenarios ... and the software design had to be reset

didn't know Fox in IBM, but after leaving IBM did a project with the company he (and some other FSD people from FAA ATC) formed
https://www.amazon.com/Brawl-IBM-1964-Joseph-Fox/dp/1456525514
Two mid air collisions 1956 and 1960 make this FAA procurement special. The computer selected will be in the critical loop of making sure that there are no more mid-air collisions. Many in IBM want to not bid. A marketing manager with but 7 years in IBM and less than one year as a manager is the proposal manager. IBM is in midstep in coming up with the new line of computers - the 360. Chaos sucks into the fray many executives- especially the next chairman, and also the IBM president. A fire house in Poughkeepsie N Y is home to the technical and marketing team for 60 very cold and long days. Finance and legal get into the fray after that.

... snip ....

Executive Qualities
https://www.amazon.com/Executive-Qualities-Joseph-M-Fox/dp/1453788794
After 20 years in IBM, 7 as a divisional Vice President, Joe Fox had his standard management presentation -to IBM and CIA groups - published in 1976 -entitled EXECUTIVE QUALITIES. It had 9 printings and was translated into Spanish -and has been offered continuously for sale as a used book on Amazon.com. It is now reprinted -verbatim- and available from Createspace, Inc - for $15 per copy. The book presents a total of 22 traits and qualities and their role in real life situations- and their resolution- encountered during Mr. Fox's 20 years with IBM and with major computer customers, both government and commercial. The presentation and the book followed a focus and use of quotations to Identify and characterize the role of the traits and qualities. Over 400 quotations enliven the text - and synthesize many complex ideas.

... snip ...

CSC posts:
https://www.garlic.com/~lynn/subtopic.html#545tech
CSC/VM posts:
https://www.garlic.com/~lynn/submisc.html#cscvm
dynamic adaptive resource management, dispatching, scheduling
https://www.garlic.com/~lynn/subtopic.html#fairshare
SMP, tightly-coupled, multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

some posts mentioning FAA ATC and triple-redundant hardware and no need for software fault analysis
https://www.garlic.com/~lynn/2024c.html#18 CP40/CMS
https://www.garlic.com/~lynn/2023f.html#84 FAA ATC, The Brawl in IBM 1964
https://www.garlic.com/~lynn/2021f.html#9 Air Traffic System

--
virtualization experience starting Jan1968, online at home since Mar1970

MVS/ISPF Editor

From: Lynn Wheeler <lynn@garlic.com>
Subject: MVS/ISPF Editor
Date: 17 Jun, 2024
Blog: Facebook
edit post from earlier this year
https://www.garlic.com/~lynn/2024.html#90 IBM, Unix, editors

Endicott, instead of selecting one of the internal full-screen editors for release to customers, had the XEDIT effort. I wrote them a memo asking why they hadn't selected the (internal) "RED" to use for XEDIT ... it had more feature/function, was much more mature, and had more efficient code (almost the same as the original line editor), etc. I got a response back that it was obviously the RED author's fault that he developed it much earlier than XEDIT and it was much better, so it should be his responsibility to bring XEDIT up to the level of RED. From a 6jun1979 email (compare CPU secs to load a large file for editing):
https://www.garlic.com/~lynn/2006u.html#email790606

EDIT CMSLIB MACLIB S 2.53/2.81
RED CMSLIB MACLIB S (NODEF) 2.91/3.12
ZED CMSLIB MACLIB S 5.83/6.52
EDGAR CMSLIB MACLIB S 5.96/6.45
SPF CMSLIB MACLIB S ( WHOLE ) 6.66/7.52
XEDIT CMSLIB MACLIB S 14.05/14.88


... I guess part of the issue was this wasn't long after "Future System" imploded
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

and head of POK manages to convince corporate to kill the vm370/cms product, shutdown the development group, and move all the people to POK for MVS/XA ... Endicott eventually manages to save the product mission (for the mid-range), but had to recreate development organization from scratch (and claims of upwards of 200 people in the ISPF organization).

In the same time-frame, the 3274/3278 started shipping as the 3272/3277 replacement ... the 3278 had lots of the electronics moved back into the 3274 controller (to reduce 3278 manufacturing costs), significantly driving up the terminal coax protocol chatter and latency. 3272/3277 had a (fixed) .086sec hardware response ... while 3274/3278 had .3-.5sec hardware response (depending on amount of data). This was also during the period when there were studies showing quarter-second trivial interactive response improved productivity. 3272/3277 would meet the objective with .164sec "system response" (.086+.164=.25sec) ... which was impossible to achieve with 3274/3278 (I actually had quite a few of my internal systems with .11sec system response, which plus the 3272/3277 .086sec gave .196sec trivial interactive response). A letter to the 3278 Product Administrator was met with the response that the 3278 wasn't targeted for interactive computing, but data entry. Note the issue didn't crop up with MVS users ... since it was a rare MVS operation that could even achieve one second system response.
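
(Back-of-envelope check of the numbers above as a small script; the 0.30sec figure is the best case quoted for the 3274/3278.)

target = 0.25    # quarter-second trivial interactive response objective
for terminal, hw in [("3272/3277", 0.086), ("3274/3278", 0.30)]:
    budget = target - hw
    print(f"{terminal}: hardware {hw:.3f}s, remaining system-response budget {budget:+.3f}s")
# 3272/3277 leaves .164s for the system; 3274/3278 is already over the .25s budget.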

trivia: later with IBM/PC, 3277 terminal emulation card would have 3-5 times the upload/download throughput of a 3278 terminal emulation card

posts mentioning 3272/3277 comparison with 3274/3278
https://www.garlic.com/~lynn/2024b.html#31 HONE, Performance Predictor, and Configurators
https://www.garlic.com/~lynn/2019e.html#28 XT/370
https://www.garlic.com/~lynn/2019c.html#4 3270 48th Birthday
https://www.garlic.com/~lynn/2017d.html#25 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2013l.html#25 Teletypewriter Model 33
https://www.garlic.com/~lynn/2013k.html#19 spacewar
https://www.garlic.com/~lynn/2013b.html#55 Dualcase vs monocase. Was: Article for the boss
https://www.garlic.com/~lynn/2012i.html#74 HELP WITH PCOM - PASTE OPTION NOT WORKING CORRECTLY
https://www.garlic.com/~lynn/2012d.html#19 Writing article on telework/telecommuting
https://www.garlic.com/~lynn/2011e.html#94 coax (3174) throughput
https://www.garlic.com/~lynn/2011d.html#53 3270 Terminal
https://www.garlic.com/~lynn/2011b.html#64 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2010o.html#57 So why doesn't the mainstream IT press seem to get the IBM mainframe?
https://www.garlic.com/~lynn/2009q.html#50 The 50th Anniversary of the Legendary IBM 1401

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

posts mentioning using revenue from VM370 Performance Products to underwrite MVS ISPF:
https://www.garlic.com/~lynn/2024.html#108 IBM, Unix, editors
https://www.garlic.com/~lynn/2023.html#28 IBM Punch Cards
https://www.garlic.com/~lynn/2022e.html#63 IBM Software Charging Rules
https://www.garlic.com/~lynn/2022c.html#45 IBM deliberately misclassified mainframe sales to enrich execs, lawsuit claims
https://www.garlic.com/~lynn/2021k.html#89 IBM PROFs
https://www.garlic.com/~lynn/2019e.html#126 23Jun1969 Unbundling
https://www.garlic.com/~lynn/2018d.html#49 What microprocessor is more powerful, the Z80 or 6502?
https://www.garlic.com/~lynn/2017i.html#23 progress in e-mail, such as AOL
https://www.garlic.com/~lynn/2017g.html#34 Programmers Who Use Spaces Paid More
https://www.garlic.com/~lynn/2017e.html#25 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017c.html#2 ISPF (was Fujitsu Mainframe Vs IBM mainframe)
https://www.garlic.com/~lynn/2014h.html#103 TSO Test does not support 65-bit debugging?
https://www.garlic.com/~lynn/2014f.html#89 Real Programmers
https://www.garlic.com/~lynn/2013i.html#36 The Subroutine Call
https://www.garlic.com/~lynn/2012n.html#64 Should you support or abandon the 3270 as a User Interface?
https://www.garlic.com/~lynn/2011p.html#106 SPF in 1978
https://www.garlic.com/~lynn/2010m.html#84 Set numbers off permanently
https://www.garlic.com/~lynn/2010g.html#50 Call for XEDIT freaks, submit ISPF requirements
https://www.garlic.com/~lynn/2010g.html#6 Call for XEDIT freaks, submit ISPF requirements
https://www.garlic.com/~lynn/2009s.html#46 DEC-10 SOS Editor Intra-Line Editing

--
virtualization experience starting Jan1968, online at home since Mar1970

801/RISC

From: Lynn Wheeler <lynn@garlic.com>
Subject: 801/RISC
Date: 17 Jun, 2024
Blog: Facebook
In 1980 there was a project to move a large number of CISC processors to 801/RISC ... controllers, entry/mid-range 370s (including the 4361&4381 follow-on to the 4331&4341), AS/400, etc ... for various reasons those efforts floundered and the business returned to CISC business as usual.

There was the 801/RISC ROMP chip that was supposed to be for the displaywriter follow-on ... when that got canceled, they decided to pivot to the unix workstation market and hired the company that had done the AT&T UNIX work for the IBM/PC PC/IX, which becomes "AIX" for PC/RT. ACIS was working on doing UCB BSD Unix for 370 and was redirected to do it for the PC/RT as "AOS". Austin then starts on the 801/RISC RIOS chipset for the RS/6000.

The last product we did at IBM started out as HA/6000 in the late 80s, originally for NYTimes to move their newspaper system (ATEX) from VAXCluster to RS/6000. I rename it HA/CMP when I start doing cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres) that had VAXCluster support in the same source base with UNIX. Lots of studies of availability and faults were showing that commodity hardware was getting more reliable and problems were increasingly environmental (floods, hurricanes, earthquakes, etc), so we needed to also support systems at geographically distributed locations. Out marketing, I coined the terms disaster survivability and geographic survivability, and the IBM S/88 Product Administrator started taking us around to their customers. The S/88 Product Administrator also got me to write a section for the corporate continuous availability strategy document (but it got pulled when both Rochester/AS400 and POK/mainframe complained they couldn't meet the requirements).

Early Jan1992, had meeting with Oracle CEO where AWD/Hester told them that we would have 16-system clusters by mid-92 and 128-system clusters by ye-92. However, by end of Jan, cluster scale-up was transferred for announce as IBM Supercomputer (technical/scientific *ONLY*) and we were told we couldn't work on anything with more than four processors (we leave IBM a few months later). 1993 Industry MIPS benchmarks, number of program iterations compared to reference platform (not actual instruction count):
ES/9000-982, 8-processor : 408MIPS, 51MIPS/processor
RS6000/990 : 126MIPS (HA/CMP clusters would have been 16-way/2016MIPS, 128-system/16,128MIPS)
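
(The cluster figures are just the per-system number multiplied out; a trivial check:)

per_system = 126                  # RS6000/990, 1993 industry MIPS benchmark
for systems in (16, 128):
    print(f"{systems}-system HA/CMP cluster: {systems * per_system:,} MIPS")
# 16 x 126 = 2,016 MIPS; 128 x 126 = 16,128 MIPS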

Note the executive we reported to when doing HA/CMP went over to head up Somerset (single chip 801/risc for AIM: Apple, IBM, Motorola; also involved integrating a lot of Motorola 88K RISC features into Power/PC ... including SMP tightly-coupled multiprocessor support). This was the basis for what Rochester used for moving AS/400 to RISC (a decade or so after the original attempt).

Trivia: AWD PC/RT had a PC/AT 16-bit bus and they had done their own 4mbit T/R (and other) cards. RS/6000 had microchannel and AWD was told they couldn't do their own cards but had to use the (communication group's severely performance-kneecapped) PS2 microchannel cards (example: the PS2 16mbit T/R microchannel card had lower card throughput than the PC/RT 4mbit T/R card ... the joke was an RS/6000 server with a microchannel 16mbit T/R card would have lower throughput than a PC/RT with a 4mbit T/R card).

refs:
https://en.wikipedia.org/wiki/IBM_RT_PC
https://en.wikipedia.org/wiki/IBM_RS/6000
https://en.wikipedia.org/wiki/AIM_alliance

Note about the PC/RT ref being used for NSFNET: Early 80s, we had the HSDT project (T1 and faster computer links), were working with the NSF Director, and were supposed to get $20M to interconnect the NSF Supercomputer Centers; then congress cuts the budget, some other things happened and eventually an RFP was released (in part based on what we already had running). From 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
https://www.garlic.com/~lynn/2018d.html#33

The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet.

The RFP called for a T1 network, but the PC/RT links were 440kbits/sec (not T1), and they put in T1 trunks with telco multiplexers (carrying multiple 440kbit links) in order to call it a T1 network. I periodically ridiculed that, asking why they didn't call it a T5 network, since it was possible that some of the T1 trunks were, in turn, carried over T5 trunks.

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic survivability posts
https://www.garlic.com/~lynn/submain.html#available
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

posts mentioning Somerset, AIM, Apple, Motorola, power/pc, as/400
https://www.garlic.com/~lynn/2024c.html#1 Disk & TCP/IP I/O
https://www.garlic.com/~lynn/2024b.html#98 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#93 PC370
https://www.garlic.com/~lynn/2024.html#1 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2023e.html#59 801/RISC and Mid-range
https://www.garlic.com/~lynn/2021k.html#133 IBM Clone Controllers
https://www.garlic.com/~lynn/2021d.html#47 Cloud Computing
https://www.garlic.com/~lynn/2019c.html#2 S/38, AS/400
https://www.garlic.com/~lynn/2013f.html#29 Delay between idea and implementation
https://www.garlic.com/~lynn/2013b.html#3 New HD
https://www.garlic.com/~lynn/2012d.html#23 IBM cuts more than 1,000 U.S. Workers
https://www.garlic.com/~lynn/2012c.html#60 Memory versus processor speed
https://www.garlic.com/~lynn/2011p.html#75 Has anyone successfully migrated off mainframes?

--
virtualization experience starting Jan1968, online at home since Mar1970

Mid-Range Market

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Mid-Range Market
Date: 17 Jun, 2024
Blog: Facebook
4300s sold into the same mid-range market as DEC VAX and in about the same numbers for small unit orders; the big difference was large corporations with orders of hundreds of VM/4341s at a time for distribution out in departmental areas (sort of the leading edge of the coming distributed computing tsunami). Archived posts with a decade of VAX sales, sliced and diced by model, year, US/non-US (MVI&MVII were micro-vax):
https://www.garlic.com/~lynn/2002f.html#0
https://www.garlic.com/~lynn/2024c.html#29

IBM 4361/4381 (follow-on to 4331/4341) was expected to see the same explosion in sales but, as can be seen in the VAX numbers, by the mid-80s the mid-range market was starting to shift to workstations and large PCs.

AS/400 wasn't released until AUG1988
https://en.wikipedia.org/wiki/IBM_AS/400

Other trivia: I had access to an early engineering 4341 and in Jan1979 was con'ed into doing a benchmark for a national lab that was looking at getting 70 for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami)

Posts mentioning mid-range distributed & cluster supercomputing tsunamis
https://www.garlic.com/~lynn/2024c.html#107 architectural goals, Byte Addressability And Beyond
https://www.garlic.com/~lynn/2024c.html#100 IBM 4300
https://www.garlic.com/~lynn/2024b.html#43 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#23 HA/CMP
https://www.garlic.com/~lynn/2024.html#51 VAX MIPS whatever they were, indirection in old architectures
https://www.garlic.com/~lynn/2023g.html#107 Cluster and Distributed Computing
https://www.garlic.com/~lynn/2023g.html#61 PDS Directory Multi-track Search
https://www.garlic.com/~lynn/2023g.html#57 Future System, 115/125, 138/148, ECPS
https://www.garlic.com/~lynn/2023g.html#15 Vintage IBM 4300
https://www.garlic.com/~lynn/2023f.html#12 Internet
https://www.garlic.com/~lynn/2023e.html#71 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#59 801/RISC and Mid-range
https://www.garlic.com/~lynn/2023d.html#102 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#18 PROFS trivia
https://www.garlic.com/~lynn/2022f.html#92 CDC6600, Cray, Thornton
https://www.garlic.com/~lynn/2022e.html#67 SHARE LSRAD Report
https://www.garlic.com/~lynn/2022c.html#19 Telum & z16
https://www.garlic.com/~lynn/2022c.html#18 IBM Left Behind
https://www.garlic.com/~lynn/2022c.html#5 4361/3092
https://www.garlic.com/~lynn/2022.html#124 TCP/IP and Mid-range market
https://www.garlic.com/~lynn/2022.html#15 Mainframe I/O
https://www.garlic.com/~lynn/2021h.html#107 3277 graphics
https://www.garlic.com/~lynn/2021f.html#84 Mainframe mid-range computing market
https://www.garlic.com/~lynn/2021c.html#47 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2021b.html#24 IBM Recruiting
https://www.garlic.com/~lynn/2021.html#53 Amdahl Computers
https://www.garlic.com/~lynn/2019e.html#27 PC Market
https://www.garlic.com/~lynn/2019d.html#107 IBM HONE
https://www.garlic.com/~lynn/2019c.html#49 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2019c.html#42 mainframe hacking "success stories"?
https://www.garlic.com/~lynn/2018e.html#92 It's 1983: What computer would you buy?
https://www.garlic.com/~lynn/2018b.html#104 AW: mainframe distribution
https://www.garlic.com/~lynn/2018.html#24 1963 Timesharing: A Solution to Computer Bottlenecks
https://www.garlic.com/~lynn/2017i.html#62 64 bit addressing into the future
https://www.garlic.com/~lynn/2016h.html#48 Why Can't You Buy z Mainframe Services from Amazon Cloud Services?
https://www.garlic.com/~lynn/2015g.html#4 3380 was actually FBA?
https://www.garlic.com/~lynn/2014m.html#173 IBM Continues To Crumble
https://www.garlic.com/~lynn/2014m.html#57 Why you need batch cloud computing

--
virtualization experience starting Jan1968, online at home since Mar1970

REXX and DUMPRX

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: REXX and DUMPRX
Date: 18 Jun, 2024
Blog: Facebook
In the very early 80s, I wanted to demonstrate REX(X) was not just another pretty scripting language (before it was renamed REXX and released to customers). I decided on redoing a large assembler application (dump processor & fault analysis) in REX with ten times the function and ten times the performance (lots of hacks and sleight of hand done to make interpreted REX run faster than the assembler version), working half time over three months elapsed. I finished early, so I started writing an automated script that searched for the most common failure signatures. It also included a pseudo dis-assembler ... converting storage areas into instruction sequences, and would format storage according to specified dsects. I got softcopy of messages&codes and could index applicable information. I had thought that it would be released to customers, but for whatever reasons it wasn't (even tho it was in use by most PSRs and internal datacenters) ... however, I finally did get permission to give talks on the implementation at user group meetings ... and within a few months similar implementations started showing up at customer shops. Old archived email from the 3092 group (3090 service processor, a pair of 4361s running a highly modified version of VM370R6 with service screens done in CMS IOS3270) that contacted me about shipping it as part of the 3092:
https://www.garlic.com/~lynn/2010e.html#email861031
https://www.garlic.com/~lynn/2010e.html#email861223

trivia: (recently gone 404, but lives on at wayback machine)
https://web.archive.org/web/20230719145910/https://www.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3090.html

mentions requiring a pair of 3370 FBA drives, even for MVS accounts, which never had FBA support. Late 70s, I offered the MVS group FBA support, but they said that even if fully integrated and tested, I needed a $26M incremental revenue business case (couple hundred million in sales) to cover the cost of education and pubs ... but since IBM was already selling every disk it could manufacture, FBA MVS sales would just translate into the same amount of disks ... and I couldn't use lifetime savings as part of the business case (note no CKD DASD has been manufactured for decades, all being simulated on industry standard fixed-block disks; even 3380s were already fixed-block ... as can be seen in the formulas for records/track, where record size has to be rounded up to a fixed cell size).
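
(Hedged illustration of the records/track point: record space gets rounded up to a fixed cell size, i.e. the "CKD" track is fixed-block underneath. The cell size, overhead, and track capacity below are placeholder numbers, not the published 3380 formula.)

import math

def records_per_track(record_len, track_cells=1500, cell=32, overhead_cells=15):
    # Each record consumes a fixed per-record overhead plus its data length
    # rounded up to whole cells; the track holds a whole number of records.
    cells_per_record = overhead_cells + math.ceil(record_len / cell)
    return track_cells // cells_per_record

print(records_per_track(4096))    # e.g. 4K records under the placeholder geometry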

dumprx posts
https://www.garlic.com/~lynn/submain.html#dumprx
DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd

posts mentioning MVS FBA $26M business case:
https://www.garlic.com/~lynn/2024b.html#110 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2023f.html#68 Vintage IBM 3380s
https://www.garlic.com/~lynn/2023f.html#58 Vintage IBM 5100
https://www.garlic.com/~lynn/2023e.html#32 3081 TCMs
https://www.garlic.com/~lynn/2023d.html#105 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023c.html#96 Fortran
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#73 IBM 4341
https://www.garlic.com/~lynn/2023.html#33 IBM Punch Cards
https://www.garlic.com/~lynn/2022f.html#85 IBM CKD DASD
https://www.garlic.com/~lynn/2021b.html#78 CKD Disks
https://www.garlic.com/~lynn/2021.html#6 3880 & 3380
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2018f.html#34 The rise and fall of IBM
https://www.garlic.com/~lynn/2018e.html#22 Manned Orbiting Laboratory Declassified: Inside a US Military Space Station
https://www.garlic.com/~lynn/2017f.html#28 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/2016c.html#12 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2015f.html#86 Formal definition of Speed Matching Buffer
https://www.garlic.com/~lynn/2014e.html#8 The IBM Strategy
https://www.garlic.com/~lynn/2014b.html#18 Quixotically on-topic post, still on topic
https://www.garlic.com/~lynn/2014.html#94 Santa has a Mainframe!
https://www.garlic.com/~lynn/2013n.html#54 rebuild 1403 printer chain
https://www.garlic.com/~lynn/2013i.html#2 IBM commitment to academia
https://www.garlic.com/~lynn/2013g.html#23 Old data storage or data base
https://www.garlic.com/~lynn/2013f.html#80 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013d.html#2 Query for Destination z article -- mainframes back to the future
https://www.garlic.com/~lynn/2013c.html#68 relative mainframe speeds, was What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013.html#40 Searching for storage (DASD) alternatives
https://www.garlic.com/~lynn/2012p.html#32 Search Google, 1960:s-style
https://www.garlic.com/~lynn/2012o.html#58 ISO documentation of IBM 3375, 3380 and 3390 track format
https://www.garlic.com/~lynn/2011j.html#57 Graph of total world disk space over time?
https://www.garlic.com/~lynn/2011e.html#44 junking CKD; was "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2011e.html#35 junking CKD; was "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2011b.html#47 A brief history of CMS/XA, part 1
https://www.garlic.com/~lynn/2011.html#23 zLinux OR Linux on zEnterprise Blade Extension???
https://www.garlic.com/~lynn/2010o.html#12 When will MVS be able to use cheap dasd
https://www.garlic.com/~lynn/2010n.html#65 When will MVS be able to use cheap dasd
https://www.garlic.com/~lynn/2010n.html#14 Mainframe Slang terms
https://www.garlic.com/~lynn/2010k.html#10 Documenting the underlying FBA design of 3375, 3380 and 3390?
https://www.garlic.com/~lynn/2010f.html#18 What was the historical price of a P/390?
https://www.garlic.com/~lynn/2009j.html#73 DCSS ... when shared segments were implemented in VM
https://www.garlic.com/~lynn/2008o.html#55 Virtual
https://www.garlic.com/~lynn/2008j.html#49 Another difference between platforms
https://www.garlic.com/~lynn/2006f.html#4 using 3390 mod-9s
https://www.garlic.com/~lynn/2006f.html#3 using 3390 mod-9s
https://www.garlic.com/~lynn/2005m.html#40 capacity of largest drive
https://www.garlic.com/~lynn/2004l.html#23 Is the solution FBA was Re: FW: Looking for Disk Calc

--
virtualization experience starting Jan1968, online at home since Mar1970

Private Equity Becomes Roach Motel as Public Pension Funds and Other Investors Borrow As Funds Remain Tied Up

From: Lynn Wheeler <lynn@garlic.com>
Subject: Private Equity Becomes Roach Motel as Public Pension Funds and Other Investors Borrow As Funds Remain Tied Up
Date: 18 Jun, 2024
Blog: Facebook
Private Equity Becomes Roach Motel as Public Pension Funds and Other Investors Borrow As Funds Remain Tied Up
https://www.nakedcapitalism.com/2024/06/private-equity-becomes-roach-motel-as-public-pension-funds-and-other-investors-borrow-as-funds-remain-tied-up.html

trivia: the industry got such a bad reputation during the S&L crisis
http://en.wikipedia.org/wiki/Savings_and_loan_crisis

that they renamed the industry private equity and "junk bonds" became "high yield bonds". I've seen TV interviews where the host kept saying "junk bonds" and the guest kept saying "high yield bonds" (when talking about the same thing).

private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
S&L Crisis posts
https://www.garlic.com/~lynn/submisc.html#s&l.crisis

--
virtualization experience starting Jan1968, online at home since Mar1970

Mid-Range Market

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mid-Range Market
Date: 18 Jun, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024d.html#13 MVS/ISPF Editor
https://www.garlic.com/~lynn/2024d.html#15 Mid-Range Market
https://www.garlic.com/~lynn/2024d.html#16 REXX and DUMPRX

also note: Future System was completely different and was going to completely replace 370 (internal politics was killing off 370 efforts, and the lack of new 370 is credited with giving the clone 370 makers their market foothold); then in the wake of the Future System implosion, there was a mad rush to get stuff back into the 370 product pipeline, including kicking off the quick&dirty 3033&3081 in parallel.
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

the head of POK also manages to convince corporate to kill the vm370/cms product, shutdown the development group, and move all the people to POK for MVS/XA; they weren't planning on telling the people about the move until the very last moment (to minimize those that might be able to escape into the Boston area). The information managed to leak early and several managed to escape, including to the brand new, infant DEC VAX effort (joke was that the head of POK was a major contributor to DEC VAX). They then had a witch hunt for the source of the leak; fortunately for me, nobody gave up the source. Endicott eventually manages to save the VM370/CMS product mission (for the mid-range), but had to recreate the development organization from scratch (with claims in the early 80s of upwards of 200 people in the ISPF organization).

As an aside, one of the final nails in the FS coffin was analysis by the IBM Houston Science Center that if 370/195 applications were migrated to an FS machine made out of the fastest available hardware, it would have the throughput of a 370/145 (something like a factor of 30 times slowdown from the architecture).

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

posts mentioning after future system implodes, head of POK convinces corporate to kill vm370 product and move all the people to POK
https://www.garlic.com/~lynn/2024c.html#87 Gordon Bell
https://www.garlic.com/~lynn/2023f.html#75 Vintage Mainframe PROFS
https://www.garlic.com/~lynn/2023f.html#26 Ferranti Atlas
https://www.garlic.com/~lynn/2023e.html#74 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#22 Copyright Software
https://www.garlic.com/~lynn/2023d.html#105 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023b.html#64 Another 4341 thread
https://www.garlic.com/~lynn/2023b.html#11 Open Software Foundation
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2022h.html#39 IBM Teddy Bear
https://www.garlic.com/~lynn/2022f.html#44 z/VM 50th
https://www.garlic.com/~lynn/2022e.html#86 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022d.html#55 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2021c.html#64 CMS Support
https://www.garlic.com/~lynn/2021c.html#12 Z/VM
https://www.garlic.com/~lynn/2021.html#53 Amdahl Computers
https://www.garlic.com/~lynn/2019b.html#77 IBM downturn
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2018e.html#100 The (broken) economics of OSS
https://www.garlic.com/~lynn/2018d.html#5 DOS & OS2
https://www.garlic.com/~lynn/2017c.html#81 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2017c.html#80 Great mainframe history(?)
https://www.garlic.com/~lynn/2015.html#87 a bit of hope? What was old is new again
https://www.garlic.com/~lynn/2014m.html#25 Microsoft Open Sources .NET, Saying It Will Run on Linux and Mac
https://www.garlic.com/~lynn/2014f.html#22 Complete 360 and 370 systems found
https://www.garlic.com/~lynn/2014e.html#52 Rather nice article on COBOL on Vulture Central
https://www.garlic.com/~lynn/2014b.html#105 Happy 50th Birthday to the IBM Cambridge Scientific Center
https://www.garlic.com/~lynn/2014.html#4 Application development paradigms [was: RE: Learning Rexx]
https://www.garlic.com/~lynn/2013n.html#94 z/OS is antique WAS: Aging Sysprogs = Aging Farmers
https://www.garlic.com/~lynn/2012p.html#53 What is holding back cloud adoption?
https://www.garlic.com/~lynn/2012k.html#33 Using NOTE and POINT simulation macros on CMS?
https://www.garlic.com/~lynn/2012f.html#39 SIE - CompArch
https://www.garlic.com/~lynn/2012e.html#38 A bit of IBM System 360 nostalgia
https://www.garlic.com/~lynn/2012d.html#65 FAA 9020 - S/360-65 or S/360-67?
https://www.garlic.com/~lynn/2011o.html#6 John R. Opel, RIP
https://www.garlic.com/~lynn/2011j.html#42 assembler help!
https://www.garlic.com/~lynn/2011g.html#8 Is the magic and romance killed by Windows (and Linux)?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Internal Network

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Internal Network
Date: 18 Jun, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#78 IBM Internal Network

co-worker at the science center was responsible for the science center wide-area network ... one of the people that invented GML at the science center in 1969, saying their original job was to promote the CP67 wide-area network:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

Science Center wide-area network morphs into the internal corporate network, larger than arpanet/internet from just about the beginning until sometime mid/late 80s ... technology was also used for the corporate sponsored univ BITNET&EARN.
https://en.wikipedia.org/wiki/BITNET
https://en.wikipedia.org/wiki/European_Academic_and_Research_Network
https://earn-history.net/technology/the-network/

At the 1jan1983 cut-over from HOST/IMP to internetworking, there were 100 IMPs and 255 hosts, while the internal corporate network was rapidly approaching 1000; old post with list of corporate locations that added one or more hosts during 1983:
https://www.garlic.com/~lynn/2006k.html#8

Edson wiki entry
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to DARPA, where Hendricks described his innovations to the principal scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75, Cerf and Hendricks were the only two delegates from the United States, to attend a workshop on Data Communications at the International Institute for Applied Systems Analysis, 2361 Laxenburg Austria where again, Hendricks spoke publicly about his innovative design which paved the way to the Internet as we know it today.

... snip ...

SJMerc article about Edson (he passed aug2020) and "IBM'S MISSED OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

We had transferred out to SJR in 1977 and, in fall of 1982, SJR had the first IBM connection to CSNET
https://en.wikipedia.org/wiki/CSNET
some old archived email
https://www.garlic.com/~lynn/98.html#email821022
https://www.garlic.com/~lynn/2002p.html#email821122
CSNET (arpanet->internet cutover) status email
https://www.garlic.com/~lynn/2002p.html#email830109
https://www.garlic.com/~lynn/2000e.html#email830202

Early 1980s, had the HSDT project with T1 and faster computer links (both satellite and terrestrial); one of the 1st links was a satellite T1 link between the IBM Los Gatos lab and Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in IBM Kingston, that had a whole boatload of Floating Point Systems boxes
https://en.wikipedia.org/wiki/Floating_Point_Systems
some of which had 40mbyte/sec disk arrays.

Was also working with the NSF director and was supposed to get $20M to interconnect the NSF supercomputer centers ... then congress cut the budget, some other things happened and eventually an RFP was released (in part based on what we already had running). Along the way, I was asked to talk to UCB; NSF thought they were giving UC a grant for a UCB supercomputer center, but folklore is that the regents' master building plan had UCSD getting the next new bldg, and it became the UCSD supercomputer center instead.

Working with the IBM UCB account team, in 1983 I was also asked if I would talk to the Berkeley "10M telescope" people and had a number of meetings with them and visits/tours of some testing being done at Lick Observatory (east of San Jose). The 10M effort was also working on the transition from film to CCD ... the plans were to put it on a mountain in Hawaii, and they wanted to do remote observing from the mainland. CCDs were still fairly primitive ... but starting to get better; in any case it looked like remote viewing would start out requiring around 800kbits/sec. Along the way, they got grants from the Keck Foundation ... and it morphs into the Keck Observatory.
https://en.wikipedia.org/wiki/W._M._Keck_Observatory
https://www.keckobservatory.org/

some archived (alt.folklore.computer) posts with old 10m email
https://www.garlic.com/~lynn/2007c.html#email830803b
https://www.garlic.com/~lynn/2007c.html#email830804c
https://www.garlic.com/~lynn/2004h.html#email830804
https://www.garlic.com/~lynn/2004h.html#email830822
https://www.garlic.com/~lynn/2004h.html#email830830
https://www.garlic.com/~lynn/2004h.html#email841121
https://www.garlic.com/~lynn/2004h.html#email860519

other trivia: 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

IBM internal politics was not allowing us to bid. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid, awarded 24Nov87)

Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET/EARN posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

NAS Hitachi 370 Clones

From: Lynn Wheeler <lynn@garlic.com>
Subject: NAS Hitachi 370 Clones
Date: 19 Jun, 2024
Blog: Facebook
Archived post with email about a former IBMer (had done one of the VM370 Performance Products) that was doing consulting ... including for Lockheed's DIALOG ... mentions DIALOG had gotten a NAS AS9000
https://www.garlic.com/~lynn/2006b.html#email810318
https://www.garlic.com/~lynn/2006b.html#email810421
https://www.garlic.com/~lynn/2009q.html#email810422

I would periodically drop by DIALOG to see him when he was in town. He tried to interest me in leaving IBM and joining DIALOG or NAS

past posts mentioning DIALOG
https://www.garlic.com/~lynn/2022g.html#86 Anyone knew or used the Dialog service back in the 80's?
https://www.garlic.com/~lynn/2022g.html#73 Anyone knew or used the Dialog service back in the 80's?
https://www.garlic.com/~lynn/2017i.html#4 EasyLink email ad
https://www.garlic.com/~lynn/2017.html#20 {wtf} Tymshare SuperBasic Source Code
https://www.garlic.com/~lynn/2014i.html#90 IBM Programmer Aptitude Test
https://www.garlic.com/~lynn/2014e.html#39 Before the Internet: The golden age of online services
https://www.garlic.com/~lynn/2011j.html#47 Graph of total world disk space over time?
https://www.garlic.com/~lynn/2009q.html#44 Old datasearches
https://www.garlic.com/~lynn/2009q.html#24 Old datasearches
https://www.garlic.com/~lynn/2009m.html#88 Continous Systems Modelling Package
https://www.garlic.com/~lynn/2007k.html#60 3350 failures
https://www.garlic.com/~lynn/99.html#150 Q: S/390 on PowerPC?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM CSC and MIT MULTICS

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM CSC and MIT MULTICS
Date: 19 Jun, 2024
Blog: Facebook
There was friendly rivalry between (IBM science center) 4th & (multics) 5th flrs ... one of their customers was USAFDC in the pentagon ...
https://www.multicians.org/sites.html
https://www.multicians.org/mga.html#AFDSC
https://www.multicians.org/site-afdsc.html

In spring 1979, some USAFDC people wanted to come by to talk to me about getting 20 4341 VM370 systems. When they finally came by six months later, the planned order had grown to 210 4341 VM370 systems. Earlier in jan1979, I had also been con'ed into doing a 6600 benchmark on an internal engineering 4341 (processor clock not running full-speed; before production shipments to customers) for a national lab that was looking at getting 70 4341s for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami). The national lab benchmark had run 35.77secs on the 6600 and 36.21secs on the engineering 4341.

IBM CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech

Posts mention USAFDC
https://www.garlic.com/~lynn/2024c.html#16 CTSS, Multicis, CP67/CMS
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024.html#100 Multicians
https://www.garlic.com/~lynn/2023f.html#12 Internet
https://www.garlic.com/~lynn/2023e.html#47 IBM 360/67
https://www.garlic.com/~lynn/2023d.html#86 5th flr Multics & 4th flr science center
https://www.garlic.com/~lynn/2022g.html#8 IBM 4341
https://www.garlic.com/~lynn/2021c.html#48 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2021b.html#24 IBM Recruiting
https://www.garlic.com/~lynn/2019c.html#42 mainframe hacking "success stories"?
https://www.garlic.com/~lynn/2018e.html#92 It's 1983: What computer would you buy?
https://www.garlic.com/~lynn/2017j.html#95 why VM, was thrashing
https://www.garlic.com/~lynn/2017c.html#53 Multics Timeline
https://www.garlic.com/~lynn/2014.html#33 Warnings for the U.S. military about innovation and the information age: The Pentagon looks like a minicomputer firm
https://www.garlic.com/~lynn/2014.html#28 The History of the Grid
https://www.garlic.com/~lynn/2014.html#26 Warnings for the U.S. military about innovation and the information age: The Pentagon looks like a minicomputer firm

--
virtualization experience starting Jan1968, online at home since Mar1970

Early Computer Use

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Early Computer Use
Date: 19 Jun, 2024
Blog: Facebook
Took two credit hr intro to fortran/computers; at the end of the semester, univ hires me to rewrite 1401 MPIO for 360/30 (doing reader->tape & tape->printer/punch frontend for 709). Univ was getting a 360/67 for tss/360 to replace 709/1401 and got a 360/30 (had 1401 microcode emulation) temporarily replacing the 1401. The univ. shut down the datacenter on weekends and I had it dedicated, although 48hrs w/o sleep made monday classes hard. They gave me a bunch of hardware & software manuals and I got to design and implement my own stand-alone monitor, device drivers, interrupt handlers, error recovery, storage management, etc; within a few weeks had a 2000 card 360 assembler program.

Within a year of the intro class, the 360/67 arrived and univ. hires me fulltime responsible for os/360 (tss/360 never came to production) and I continued to have my dedicated weekend time. Student fortran ran in under a second on the (tape->tape) 709; initially on os/360 (360/67 running as 360/65), it ran over a minute. I install HASP, which cuts the time in half. I then redo STAGE2 SYSGEN, placing datasets and PDS members to optimize arm seek and multi-track search, cutting another 2/3rds to 12.9secs; never got better than the 709 until I install Univ. of Waterloo WATFOR.

Before I graduate I'm hired into small group in Boeing CFO office to help with formation of Boeing Computer Services (consolidate all dataprocessing into independent business unit). I think Renton datacenter possibly largest in the world, couple hundred million in 360s, 360/65s arriving faster than they could be installed, boxes constantly staged in hallways around machine room (somebody joked that Boeing was acquiring 360/65s like other companies acquired keypunches). Lots of politics between Renton Director and CFO, who only had a 360/30 up at Boeing field for payroll (although they enlarge the machine room to install 360/67 for me to play with when I'm not doing other stuff).

When I graduate, I join science center (instead of staying with Boeing CFO) ... not long later IBM gets new CSO (had come from gov. service, at one time head of presidential detail) and I'm asked to spend time with him talking about computer security.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

some recent posts mentioning working in Boeing CFO office:
https://www.garlic.com/~lynn/2024c.html#100 IBM 4300
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024c.html#9 Boeing and the Dark Age of American Manufacturing
https://www.garlic.com/~lynn/2024b.html#111 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#77 Boeing's Shift from Engineering Excellence to Profit-Driven Culture: Tracing the Impact of the McDonnell Douglas Merger on the 737 Max Crisis
https://www.garlic.com/~lynn/2024.html#73 UNIX, MULTICS, CTSS, CSC, CP67
https://www.garlic.com/~lynn/2024.html#43 Univ, Boeing Renton and "Spook Base"
https://www.garlic.com/~lynn/2024.html#25 1960's COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME Origin and Technology (IRS, NASA)
https://www.garlic.com/~lynn/2024.html#23 The Greatest Capitalist Who Ever Lived

posts mentioning asked to spend time with new IBM CSO talking about computer security:
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2023g.html#4 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#87 FAA ATC, The Brawl in IBM 1964
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2023.html#58 Almost IBM class student
https://www.garlic.com/~lynn/2022h.html#75 Researchers found security pitfalls in IBM's cloud infrastructure
https://www.garlic.com/~lynn/2022f.html#115 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022e.html#98 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022c.html#4 Industrial Espionage
https://www.garlic.com/~lynn/2022b.html#38 Security
https://www.garlic.com/~lynn/2022.html#57 Computer Security
https://www.garlic.com/~lynn/2022.html#45 Automated Benchmarking
https://www.garlic.com/~lynn/2021k.html#102 IBM CSO
https://www.garlic.com/~lynn/2021j.html#37 IBM Confidential
https://www.garlic.com/~lynn/2021g.html#69 Mainframe mid-range computing market
https://www.garlic.com/~lynn/2021g.html#66 The Case Against SQL
https://www.garlic.com/~lynn/2021f.html#16 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2021e.html#57 Hacking, Exploits and Vulnerabilities
https://www.garlic.com/~lynn/2021d.html#84 Bizarre Career Events
https://www.garlic.com/~lynn/2021c.html#40 Teaching IBM class
https://www.garlic.com/~lynn/2021.html#78 Interactive Computing
https://www.garlic.com/~lynn/2021.html#0 IBM "Wild Ducks"
https://www.garlic.com/~lynn/2020.html#37 Early mainframe security
https://www.garlic.com/~lynn/2019d.html#66 Facebook Knows More About You Than the CIA
https://www.garlic.com/~lynn/2019.html#67 Economic Mess
https://www.garlic.com/~lynn/2018d.html#0 The Road Not Taken: Edward Lansdale and the American Tragedy in Vietnam
https://www.garlic.com/~lynn/2018b.html#99 IBM 5100
https://www.garlic.com/~lynn/2017g.html#75 Running unsupported is dangerous was Re: AW: Re: LE strikes again
https://www.garlic.com/~lynn/2017f.html#88 IBM Story
https://www.garlic.com/~lynn/2017e.html#90 Ransomware on Mainframe application ?
https://www.garlic.com/~lynn/2017e.html#50 A flaw in the design; The Internet's founders saw its promise but didn't foresee users attacking one another
https://www.garlic.com/~lynn/2015e.html#28 The real story of how the Internet became so vulnerable
https://www.garlic.com/~lynn/2014i.html#54 IBM Programmer Aptitude Test
https://www.garlic.com/~lynn/2013n.html#60 Bridgestone Sues IBM For $600 Million Over Allegedly 'Defective' System That Plunged The Company Into 'Chaos'
https://www.garlic.com/~lynn/2013f.html#61 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2010q.html#53 Programmer Charged with thieft (maybe off topic)
https://www.garlic.com/~lynn/2010q.html#8 Plug Your Data Leaks from the inside
https://www.garlic.com/~lynn/2010q.html#3a The Great Cyberheist
https://www.garlic.com/~lynn/2010j.html#33 Personal use z/OS machines was Re: Multiprise 3k for personal Use?
https://www.garlic.com/~lynn/2010j.html#19 Personal use z/OS machines was Re: Multiprise 3k for personal Use?
https://www.garlic.com/~lynn/2009r.html#41 While watching Biography about Bill Gates on CNBC last Night
https://www.garlic.com/~lynn/2009r.html#39 While watching Biography about Bill Gates on CNBC last Night
https://www.garlic.com/~lynn/2009g.html#24 Top 10 Cybersecurity Threats for 2009, will they cause creation of highly-secure Corporate-wide Intranets?
https://www.garlic.com/~lynn/2005g.html#55 Security via hardware?
https://www.garlic.com/~lynn/aadsm27.htm#49 If your CSO lacks an MBA, fire one of you

--
virtualization experience starting Jan1968, online at home since Mar1970

Obscure Systems in my Past

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Obscure Systems in my Past
Newsgroups: alt.folklore.computers
Date: Thu, 20 Jun 2024 08:32:17 -1000
"Kurt Weiske" <kurt.weiske@realitycheckbbs.org.remove-8hr-this> writes:
Posting about the Pick system I worked on reminded me of another weird system I worked on.

The 801/RISC ROMP chip ran CP.r implemented in PL.8 ... it was going to be used for the displaywriter follow-on. When the displaywriter follow-on got canceled (word processing moving to ibm/pc), they decided to pivot to the unix workstation market and got the company that did the AT&T Unix port to the IBM/PC for PC/IX to do one for ROMP (PC/RT and AIX). The issue was what to do with the 200 PL.8 programmers.

They decided to do VRM, a sort of virtual machine implementation in PL.8, and told the unix port company it would be simpler & faster if, instead of porting to the real ROMP, they ported to the pseudo virtual machine VRM interface (one downside for the unix market was that new device drivers required both a unix/aix C driver and a VRM PL.8 driver). Then, to help justify the VRM, they also got a Pick port to the pseudo virtual machine VRM interface (and the two could run concurrently)

Note IBM ACIS had a few people in the process of doing a UCB BSD port to (mainframe) 370 when they were told to instead port to PC/RT (bare machine w/o VRM) ... which took enormously fewer resources and enormously less time than either VRM or AIX.

posts mentioning 801/risc, iliad, romp, rios, pc/rt, rs/6000, power/pc, etc
https://www.garlic.com/~lynn/subtopic.html#801

--
virtualization experience starting Jan1968, online at home since Mar1970

ARM is sort of channeling the IBM 360

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: ARM is sort of channeling the IBM 360
Newsgroups: comp.arch
Date: Thu, 20 Jun 2024 14:28:04 -1000
John Levine <johnl@taugh.com> writes:
It's not that close. S/360 had a single key in the PSW that it matched against all of a program's storage refrences while this has the tag in a pointer, so it's more like a capability.

The x86 protection keys are more like S/360. There's a key for each virtual page and a PKRU register that has to match.


360s: each 2kbytes of storage had a 4bit storage protect key .... the executing PSW 4bit key is matched against the storage protect key. A zero PSW 4bit key is reserved for the system, allowing access to all storage ... non-zero keys allow for (isolating) up to 15 separate concurrently executing (mvt) regions.
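
(A minimal model of that key check, assuming store protection only: 2K blocks each carry a 4-bit key, and a store is allowed when the PSW key is zero or matches the block's key. Fetch protection and other refinements are left out.)

BLOCK = 2048

def store_allowed(psw_key, storage_keys, address):
    # storage_keys: one 4-bit key per 2K block of real storage.
    block_key = storage_keys[address // BLOCK]
    return psw_key == 0 or psw_key == block_key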

a little over a decade ago I was asked to track down the decision to add virtual memory to all 370s. Basically MVT storage management was so bad that it required specifying region storage requirements four times larger than used ... limiting the number of concurrently executing regions to less than the number needed to keep a 1mbyte 370/165 busy and justified. Going to a single 16mbyte virtual memory (VS2/SVS) allowed increasing the number of concurrent regions by a factor of four (up to 15) with little or no paging (sort of like running MVT in a CP67 16mbyte virtual machine). The biggest bit of code was creating a copy of the passed channel (I/O) programs, substituting real addresses for virtual addresses (Ludlow borrows "CCWTRANS" from CP67, crafting it into MVT EXCP/SVC0).
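
(A rough sketch of what the borrowed CCWTRANS logic has to do: build a shadow copy of the passed channel program with virtual data addresses replaced by real addresses, the pages having been fixed first. The (opcode, addr, count) tuple shape and the virt_to_real helper are simplifications, not the actual CP67/SVS code.)

def translate_channel_program(ccws, virt_to_real):
    # ccws: list of (opcode, virtual_addr, count); returns the shadow copy
    # that the channel actually runs, with real addresses substituted.
    shadow = []
    for op, vaddr, count in ccws:
        raddr = virt_to_real(vaddr)   # page must be fixed in real storage for the I/O
        shadow.append((op, raddr, count))
    return shadow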

trivia: 370/165 engineers started complaining that if they had to implement the full 370 virtual memory architecture, it would slip announce by six months ... so several features were dropped (including the virtual memory segment table entry r/o flag, which would have allowed a combination of different virtual address spaces sharing the same segment, some r/w and some r/o). Note: other models (& software) that had implemented the full architecture had to drop back to the 370/165 subset.

370s were getting larger fast and increasingly needed more than 15 concurrently executing regions (to keep systems busy and justified), and so the transition to VS2/MVS, a different virtual address space for each region (isolating each region's storage access in a different virtual address space). However, it inherited the os/360 pointer-passing APIs and so mapped an image of the "MVS" kernel into eight mbytes of every virtual address space (leaving eight for application). Also "subsystems" were mapped into separate address spaces and (with the pointer-passing APIs) needed to access application storage. Initially a common 1mbyte segment storage area was mapped into all address spaces (common segment area/"CSA"). However, space requirements were somewhat proportional to the number of subsystems and concurrently executing applications, and "CSA" quickly becomes "common system area".

By the 3033 time-frame CSA was frequently 5-6mbytes ... leaving 2-3mbytes for application regions (and threatening to become 8mbytes, leaving zero). This was part of the mad rush to xa/370 ... special architecture features for MVS, including subsystems able to concurrently access multiple address spaces (a subset was eventually retrofitted to the 3033 as "dual-address space mode").
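
(back-of-envelope illustration of the squeeze, using the approximate figures above; just a sketch, not anything measured)

  #include <stdio.h>

  int main(void)
  {
      double total_mb  = 16.0;   /* 24-bit addressing: 16MB per virtual address space */
      double kernel_mb = 8.0;    /* MVS kernel image mapped into every address space */
      double csa_mb    = 6.0;    /* CSA grown to 5-6MB by the 3033 time-frame */
      printf("left for an application region: ~%.0f MB\n", total_mb - kernel_mb - csa_mb);
      return 0;
  }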

other trivia: in the 70s, I was pontificating that there was an increasing mismatch between disk throughput and system throughput. In the early 80s I wrote a tome about how relative system disk throughput had declined by an order of magnitude since os/360 announce (systems got 40-50 times faster, disks only got 3-5 times faster). A disk executive took exception and assigned the division system performance group to refute the claim. After a couple of weeks they came back and effectively said I had slightly understated the case. Their analysis was then turned into a (mainframe user group) SHARE
https://en.wikipedia.org/wiki/SHARE_Operating_System
presentation about configuring disks for better system throughput (16Aug1984, SHARE 63, B874).
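
(the "order of magnitude" is just the ratio of the two growth factors; a trivial sketch of the arithmetic using the mid-range values quoted above)

  #include <stdio.h>

  int main(void)
  {
      double cpu_gain  = 45.0;   /* systems ~40-50 times faster since os/360 announce */
      double disk_gain = 4.0;    /* disks only ~3-5 times faster */
      printf("relative disk throughput decline: ~%.0fx\n", cpu_gain / disk_gain);  /* ~11x */
      return 0;
  }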

posts mentioning DASD, CKD, FBA, multi-track search, etc
https://www.garlic.com/~lynn/submain.html#dasd

some recent posts mentioning SHARE B874
https://www.garlic.com/~lynn/2024c.html#109 Old adage "Nobody ever got fired for buying IBM"
https://www.garlic.com/~lynn/2024c.html#55 backward architecture, The Design of Design
https://www.garlic.com/~lynn/2023g.html#32 Storage Management
https://www.garlic.com/~lynn/2023e.html#92 IBM DASD 3380
https://www.garlic.com/~lynn/2023e.html#7 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023b.html#26 DISK Performance and Reliability
https://www.garlic.com/~lynn/2023b.html#16 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023.html#33 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#6 Mainrame Channel Redrive
https://www.garlic.com/~lynn/2022h.html#36 360/85
https://www.garlic.com/~lynn/2022g.html#88 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#84 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022f.html#0 Mainframe Channel I/O
https://www.garlic.com/~lynn/2022e.html#49 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022d.html#48 360&370 I/O Channels
https://www.garlic.com/~lynn/2022d.html#22 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022b.html#77 Channel I/O
https://www.garlic.com/~lynn/2022.html#92 Processor, DASD, VTAM & TCP/IP performance
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021k.html#131 Multitrack Search Performance
https://www.garlic.com/~lynn/2021k.html#108 IBM Disks
https://www.garlic.com/~lynn/2021j.html#105 IBM CKD DASD and multi-track search
https://www.garlic.com/~lynn/2021j.html#78 IBM 370 and Future System
https://www.garlic.com/~lynn/2021i.html#23 fast sort/merge, OoO S/360 descendants
https://www.garlic.com/~lynn/2021g.html#44 iBM System/3 FORTRAN for engineering/science work?
https://www.garlic.com/~lynn/2021f.html#53 3380 disk capacity
https://www.garlic.com/~lynn/2021e.html#33 Univac 90/30 DIAG instruction
https://www.garlic.com/~lynn/2021.html#79 IBM Disk Division
https://www.garlic.com/~lynn/2021.html#59 San Jose bldg 50 and 3380 manufacturing
https://www.garlic.com/~lynn/2021.html#17 Performance History, 5-10Oct1986, SEAS

some other recent posts mentioning (3033) dual-address space mode
https://www.garlic.com/~lynn/2024c.html#67 IBM Mainframe Addressing
https://www.garlic.com/~lynn/2024c.html#55 backward architecture, The Design of Design
https://www.garlic.com/~lynn/2024b.html#108 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#107 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#58 Vintage MVS
https://www.garlic.com/~lynn/2024.html#1 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2023g.html#2 Vintage TSS/360
https://www.garlic.com/~lynn/2023d.html#108 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#22 IBM 360/195
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#0 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2022f.html#122 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021k.html#113 IBM Future System

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 23June1969 Unbundling Announcement

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 23June1969 Unbundling Announcement
Date: 22 Jun, 2024
Blog: Facebook
from original post
https://peterskastner.wordpress.com/2024/06/21/ibm-unbundling-decision-birth-of-an-independent-software-industry/

from comment in post in another FACEBOOK group

23Jun1969 unbundling started to charge for (application) software (however IBM manages to make the case that kernel software should still be free), SE (software/support engineer) services, and maintenance.

SE training used to include a sort of apprentice program as part of a large group on-site at the customer installation; with unbundling they couldn't figure out how NOT to charge for on-site trainee SE time ... kicking off HONE (hands-on network experience) ... branch office online access to (virtual machine) CP/67 systems for practicing with guest operating systems. The science center had originally wanted a 360/50 to add virtual memory hardware support, but all the spare 360/50s were going to the FAA ATC program and so it had to settle for a 360/40, doing "CP40" & "CMS". When the 360/67, standard with virtual memory, becomes available, CP40/CMS morphs into CP67/CMS.

I had taken a two credit hour intro to Fortran/Computers and at the end of the semester, the univ hires me to rewrite 1401 MPIO for the 360/30. The univ. was getting a 360/67 for TSS/360, replacing 709/1401, and was temporarily getting a 360/30 replacing the 1401 ... as part of getting 360 experience. The univ shut down the datacenter on weekends and I would have the place dedicated, although 48hrs w/o sleep made monday classes hard. I was given a bunch of hardware and software manuals and got to design and implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc., and within a few weeks had a 2000 card assembler program. The 360/67 arrives within a year of taking the intro class and I was hired fulltime responsible for OS/360 (TSS/360 never really came to production and so the 360/67 ran fulltime as a 360/65).

Then CSC came out to install CP67 (3rd installation after CSC itself and MIT Lincoln Labs) and I got to play with it mostly during my weekend dedicated time ... getting to redesign and rewrite a lot of the code. Before I graduate, I was hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services, consolidating all dataprocessing into an independent business unit. I thought Renton was possibly the largest datacenter in the world, 360/65s arriving faster than they could be installed (boxes constantly staged in hallways around the machine room). Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing field for payroll (although they enlarge the machine room to install a 360/67 for me to play with when I wasn't doing other stuff). Eventually I graduate and join the science center (instead of staying with Boeing CFO).

At the science center, one of my hobbies was enhanced production operating systems for internal datacenters and HONE was a long time customer. CSC ports APL\360 to CMS for CMS\APL and HONE starts offering CMS\APL-based branch sales&marketing support applications which come to dominate all HONE activity (and original SE guest operating system just dwindles away).

During the early 70s, IBM has the "Future System" effort which was completely different from 360/370 and was going to completely replace it. During FS, internal politics was killing off 370 efforts (the lack of new 370s is credited with giving the clone 370 makers their market foothold). When FS finally implodes there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel.

Possibly because of the rise of the 370 clone makers, IBM also decides to start charging for kernel software, starting with new kernel add-ons (eventually transitioning to charging for all kernel software and no longer providing software source), and a bunch of my internal stuff was selected as guinea pig (I get to spend a lot of time with business planners and lawyers on kernel software charging practices). By the early 80s, all kernel software was being charged for, and the OCO-wars (object code only) with customers were starting.

random other thoughts:
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

unbundling announce posts
https://www.garlic.com/~lynn/submain.html#unbundle
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 23June1969 Unbundling Announcement

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 23June1969 Unbundling Announcement
Date: 22 Jun, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024d.html#25 IBM 23June1969 Unbundling Announcement

recently comment to REXX post:
https://www.garlic.com/~lynn/2024d.html#16 REXX and DUMPRX

In the very early 80s, I wanted to demonstrate REX(X) was not just another pretty scripting language (before it was renamed REXX and released to customers). I decided on redoing a large assembler application (dump processor & fault analysis) in REX with ten times the function and ten times the performance (lots of hacks and sleight of hand done to make interpreted REX run faster than the assembler version), working half time over three months elapsed. I finished early so started writing automated scripts that searched for the most common failure signatures. It also included a pseudo dis-assembler ... converting storage areas into instruction sequences, and would also format storage according to specified dsects. I got softcopy of messages&codes and could index applicable information.

I had thought that it would be released to customers, but for whatever reasons it wasn't (even tho it was in use by most PSRs and internal datacenters) ... however, I finally did get permission to give talks on the implementation at user group meetings ... and within a few months similar implementations started showing up at customer shops. Old archived email: the 3092 group (3090 service processor, a pair of 4361s running a highly modified version of VM370R6 with service screens done in CMS IOS3270) contacted me about shipping it as part of the 3092
https://www.garlic.com/~lynn/2010e.html#email861031
https://www.garlic.com/~lynn/2010e.html#email861223

trivia: (recently gone 404, but lives on at wayback machine)
https://web.archive.org/web/20230719145910/https://www.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3090.html
mentions requiring a pair of 3370 FBA drives, even for MVS accounts, which never had FBA support. Late 70s, I offered the MVS group FBA support, but they said that even if fully integrated and tested, I needed a $26M incremental revenue business case (a couple hundred million in sales) to cover the cost of education and pubs ... but since IBM was already selling every disk it could manufacture, FBA would just translate into the same amount of disks ... and I couldn't use lifetime savings as part of the business case (note: no CKD DASD have been manufactured for decades, all being simulated on industry standard fixed-block disks).

other trivia: while at Boeing I modified CP67 to support a "pageable kernel" for lower-use features, in some cases splitting large routines into four 4kbyte page "chunks". This drove up the number of kernel entry points to over 255. Turns out CP67 text decks were placed behind a "BPS Loader" which only supported 255 ESD entries (I was faced with a constant kludge to keep ESD entries within the 255 limit) ... also discovered that BPS did pass the address of its ESD table to the routine it entered (CPINIT, which wrote an image copy to disk for IPL from disk) ... I moved a copy of the ESD table to the end of the pageable kernel ... which CPINIT would also write to disk. Also modified CPINIT so that when IPLed from disk, pre-allocating the failure/dump file, it would also write a copy of the ESD table to the file (a complete copy of all ESD entries was available while running as well as in any DUMP file for analysis).

After leaving Boeing for the science center ... I did find a source copy of the BPS loader in a card cabinet up in the 545 tech sq attic ... which I modified for more than 255 ESD entries and also added a new type of control card that would force a 4k-byte boundary.

DUMPRX posts
https://www.garlic.com/~lynn/submain.html#dumprx
cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd

a few posts mentioning 709/1401, mpio, 360/67, os/360, cp/67, boeing cfo, renton datacenter, boeing field
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#83 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles
https://www.garlic.com/~lynn/2018f.html#51 All programmers that developed in machine code and Assembly in the 1940s, 1950s and 1960s died?

--
virtualization experience starting Jan1968, online at home since Mar1970

STL Channel Extender

From: Lynn Wheeler <lynn@garlic.com>
Subject: STL Channel Extender
Date: 22 Jun, 2024
Blog: Facebook
1980 IBM STL (since renamed SVL) was bursting at the seams and they were moving 300 people from IMS group to offsite bldg with service back to STL datacenter. They had tried "remote 3270", but found the human factors unacceptable. I get con'ed into implementing channel extender support for NSC HYPERChannel (A220, A710/A715/A720, A510/A515) ... allowing channel attached 3270 controllers to be located at the offsite bldg, connected to mainframes back in STL datacenter ... with no perceived difference in human factors (quarter second or better trivial response).
https://en.wikipedia.org/wiki/Network_Systems_Corporation
https://en.wikipedia.org/wiki/HYPERchannel

STL had spread 3270 controller boxes across all the channels with 3830 disk controller boxes. Turns out the A220 mainframe channel-attach boxes (used for channel extender) had significantly lower channel busy for the same amount of 3270 terminal traffic (as 3270 channel-attach controllers) and as a result the throughput for IMS group 168s (with NSC A220s) increased by 10-15% ... and STL considered using NSC HYPERChannel A220 channel-extender configuration, for all 3270 controllers (even those within STL). NSC tried to get IBM to release my support, but a group in POK playing with some fiber stuff got it vetoed (concerned that if it was in the market, it would make it harder to release their stuff).

trivia: The vendor eventually duplicated my support and then the 3090 Product Administrator tracked me down. He said that 3090 channels were designed to have a total of 3-5 channel errors (EREP reported) across all systems & customers over a year period and there were instead 20 (the extra turned out to be channel-extender support). When I got an unrecoverable telco transmission error, I would reflect a CSW "channel-check" to the host software. I did some research and found that if an IFCC (interface control check) was reflected instead, it basically resulted in the same system recovery activity (and got the vendor to change their software from "CC" to "IFCC").
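
(a minimal sketch of that recovery decision; the status names and bit values follow my reading of S/370 CSW channel-status conventions and are assumptions here, not the vendor's actual channel-extender code)

  #include <stdio.h>

  #define CSW_CHANNEL_CONTROL_CHECK   0x04  /* "channel check" -- counted by EREP as channel hardware error */
  #define CSW_INTERFACE_CONTROL_CHECK 0x02  /* IFCC -- drives equivalent host recovery */

  /* choose the channel status to reflect to host software for an unrecoverable
     telco transmission error on the extended link */
  static unsigned status_for_link_error(void)
  {
      /* reflecting IFCC instead of channel check produces the same host recovery
         activity without inflating the 3090 channel-error statistics */
      return CSW_INTERFACE_CONTROL_CHECK;
  }

  int main(void)
  {
      printf("reflect status 0x%02x\n", status_for_link_error());
      return 0;
  }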

About the same time, the IBM communication group was fighting off the release of mainframe TCP/IP ... and when that got reversed, they changed their tactic and claimed that since they had corporate ownership of everything that crossed datacenter walls, TCP/IP had to be released through them; what shipped got 44kbytes/sec aggregate using nearly a whole 3090 processor. I then did RFC1044 support and in some tuning tests at Cray Research between a Cray and an IBM 4341, got sustained 4341 channel throughput using only a modest amount of 4341 CPU (something like a 500 times improvement in bytes moved per instruction executed).
https://datatracker.ietf.org/doc/html/rfc1044

other trivia: 1988, the IBM branch office asks me if I could help LLNL (national lab) "standardize" some fiber stuff they were playing with, which quickly becomes FCS (fibre-channel standard, including some stuff I had done in 1980), initially 1gbit/sec, full-duplex, aggregate 200mbyte/sec. Then the POK "fiber" group gets their stuff released in the 90s with ES/9000 as ESCON, when it was already obsolete, 17mbytes/sec. Then some POK engineers get involved with FCS and define a heavy-weight protocol that drastically cuts the native throughput, which eventually ships as FICON. The most recent public benchmark I've found is z196 "Peak I/O" getting 2M IOPS using 104 FICON (over 104 FCS). About the same time an FCS was announced for E5-2600 server blades claiming over a million IOPS (two such FCS having higher throughput than 104 FICON). Note also, IBM documents recommend keeping SAPs (system assist processors that do I/O) to 70% CPU (which would be more like 1.5M IOPS).
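
(rough per-link arithmetic implied by the figures quoted above; the numbers are the ones in the text, not new measurements)

  #include <stdio.h>

  int main(void)
  {
      double z196_peak_iops = 2.0e6;   /* z196 "Peak I/O" benchmark */
      int    ficon_links    = 104;     /* FICON channels used, each running over an FCS */
      double fcs_iops       = 1.0e6;   /* single native FCS claimed for E5-2600 blades */

      printf("per-FICON: ~%.0f IOPS\n", z196_peak_iops / ficon_links);   /* ~19K per link */
      printf("two native FCS: ~%.0f IOPS\n", 2.0 * fcs_iops);            /* > 104 FICON total */
      return 0;
  }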

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
rfc 1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
FICON &/or FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon

some posts mentioning 3090 channel-extender channel "errors"
https://www.garlic.com/~lynn/2023e.html#107 DataTree, UniTree, Mesa Archival
https://www.garlic.com/~lynn/2023d.html#4 Some 3090 & channel related trivia:
https://www.garlic.com/~lynn/2019c.html#16 mainframe hacking "success stories"?
https://www.garlic.com/~lynn/2018d.html#48 IPCS, DUMPRX, 3092, EREP
https://www.garlic.com/~lynn/2016h.html#53 Why Can't You Buy z Mainframe Services from Amazon Cloud Services?
https://www.garlic.com/~lynn/2012e.html#54 Why are organizations sticking with mainframes?
https://www.garlic.com/~lynn/2008g.html#10 Hannaford case exposes holes in law, some say
https://www.garlic.com/~lynn/2005u.html#22 Channel Distances
https://www.garlic.com/~lynn/2004j.html#19 Wars against bad things

--
virtualization experience starting Jan1968, online at home since Mar1970

Do We Need Language to Think?

From: Lynn Wheeler <lynn@garlic.com>
Subject: Do We Need Language to Think?
Date: 22 Jun, 2024
Blog: Facebook
Do We Need Language to Think? A group of neuroscientists argue that our words are primarily for communicating, not for reasoning.
https://www.nytimes.com/2024/06/19/science/brain-language-thought.html

Late 70s & early 80s, I was blamed for online computer conferencing on the internal corporate network (larger than the arpanet/internet from just about the beginning until sometime mid/late 80s) ... it really took off spring 1981 when I distributed a trip report of a visit to Jim Gray at Tandem. Folklore is 5of6 of the corporate executive committee wanted to fire me. One of the outcomes was that a researcher was paid to sit in the back of my office for nine months, taking notes on how I communicated, face-to-face, telephone, got copies of all my incoming and outgoing email and logs of all instant messages. Results were books, papers, conference talks and a Stanford PhD joint between language and computer AI (Winograd was advisor on the AI side).

The researcher was an ESL teacher in a prior life and claimed I have characteristics of English as a 2nd language ... but I have no other natural language ... so there was some conjecture that I don't have a "native" natural language.

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal corporate network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

some posts mentioning ESL characteristics but no native natural language
https://www.garlic.com/~lynn/2022h.html#90 Psychology of Computer Programming
https://www.garlic.com/~lynn/2022f.html#88 Foreign Language
https://www.garlic.com/~lynn/2022c.html#102 IBM Bookmaster, GML, SGML, HTML
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021j.html#64 addressing and protection, was Paper about ISO C
https://www.garlic.com/~lynn/2018e.html#18 Online Computer Conferencing
https://www.garlic.com/~lynn/2018c.html#73 Army researchers find the best cyber teams are antisocial cyber teams
https://www.garlic.com/~lynn/2017g.html#94 AI Is Inventing Languages Humans Can't Understand. Should We Stop It?
https://www.garlic.com/~lynn/2016f.html#84 We Use Words to Talk. Why Do We Need Them to Think?
https://www.garlic.com/~lynn/2016d.html#8 What Does School Really Teach Children
https://www.garlic.com/~lynn/2016.html#49 Strategy
https://www.garlic.com/~lynn/2015b.html#66 fingerspitzengefuhl and Coup d'oeil
https://www.garlic.com/~lynn/2012f.html#51 Thinking in a Foreign Language
https://www.garlic.com/~lynn/2010o.html#60 Compressing the OODA-Loop - Removing the D (and maybe even an O)

--
virtualization experience starting Jan1968, online at home since Mar1970

Future System and S/38

From: Lynn Wheeler <lynn@garlic.com>
Subject: Future System and S/38
Date: 23 Jun, 2024
Blog: Facebook
Folklore is that with Future System imploding, some of the people retreated to Rochester and did a simplified version resulting in the S/38. FS was completely different from 360/370 and was going to completely replace it. One of the last nails in the FS "coffin" was analysis by the IBM Houston Science Center that if 370/195 apps were moved to an FS machine made out of the fastest available hardware, it would have the throughput of a 370/145 (about a 30 times slowdown; although the available hardware could still meet the throughput requirements of the low-end s/38 market)
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

One of the S/38 issues is that it treated all disks as a single large filesystem (a single file might have scatter allocation with pieces on multiple disks) ... backup had to be done as a single large unit (which could take many hrs while the system was down) and any simple single disk failure required replacing the failed disk and then restoring the complete filesystem backup, which could take a day or two (a large mainframe with 300 disks would have been down for weeks for any such backup or restore).
https://en.wikipedia.org/wiki/IBM_System/38
Single disk failures were so traumatic that S/38 was early adopter of redundant disks (IBM 1977 patent)
https://en.wikipedia.org/wiki/RAID#History
this mentions microcode and 23jun1969 unbundling announcement
https://en.wikipedia.org/wiki/IBM_System/38#Microcode

recent longer tome on unbundling, charging for (application) software, keeping "kernel software" free and later decision to start charging for kernel software
https://www.garlic.com/~lynn/2024d.html#25 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024d.html#26 IBM 23June1969 Unbundling Announcement

I continued to work on 360/370 all during the FS period and would periodically ridicule FS activity (which wasn't a very career enhancing activity) ... including their single-level-store, which was somewhat a carry-over from TSS/360. I had done a page-mapped filesystem for CMS and would claim that I learned what not to do from TSS/360 ... however, part of the FS failure gave single-level-store implementations a bad reputation ... and I couldn't get my CMS version approved as part of the product (even though I could demonstrate at least three times the throughput of the standard CMS filesystem).
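
(a rough modern analogue of the page-mapped idea using POSIX mmap; this is not the CMS page-mapped API, just an illustration of letting the paging machinery move file blocks instead of issuing explicit per-record reads; the file name is hypothetical)

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/mman.h>
  #include <sys/stat.h>
  #include <unistd.h>

  int main(void)
  {
      int fd = open("program.image", O_RDONLY);      /* hypothetical file */
      if (fd < 0) return 1;

      struct stat st;
      if (fstat(fd, &st) < 0) return 1;

      /* map the whole file: pages are brought in by the paging subsystem, which can
         batch/chain the transfers, rather than by per-record read calls */
      void *p = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
      if (p == MAP_FAILED) return 1;

      printf("first byte: %d\n", ((unsigned char *)p)[0]);

      munmap(p, (size_t)st.st_size);
      close(fd);
      return 0;
  }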

Note a little over a decade ago, I was asked to track down the decision to add virtual memory to all 370s; basically MVT storage management was so bad that region sizes had to be specified four times larger than used, as a result a typical 1mbyte 370/165 would only run four concurrent regions, insufficient to keep a 165 busy and justified; going to a 16mbyte virtual address space, aka VS2/SVS, increased the number of concurrently running regions by a factor of four with little or no paging (sort of like running MVT in a CP67 16mbyte virtual machine). Simpson (of HASP fame) had done an MFT-based virtual memory operating system with page-mapped filesystem "RASP" ... he couldn't make any headway ... and left IBM for Amdahl where he redid "RASP" from scratch. Post with some of the email exchange about virtual memory for all 370s
https://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
23jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
posts mentioning getting to play disk engineer in bldg14&15 across the street in 2nd half 70s
https://www.garlic.com/~lynn/subtopic.html#disk
cms page-mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap

--
virtualization experience starting Jan1968, online at home since Mar1970

Future System and S/38

From: Lynn Wheeler <lynn@garlic.com>
Subject: Future System and S/38
Date: 23 Jun, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024d.html#29 Future System and S/38

CEO Learson tried (and failed) to block bureaucrats, careerists, MBAs from destroying Watson culture & legacy.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

FS (failing) significantly accelerated the rise of the bureaucrats, careerists, and MBAs .... From Ferguson & Morris, "Computer Wars: The Post-IBM World", Time Books, 1993
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394
"and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrong headedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive

... snip ...

note: since FS was going to replace 360/370, internal politics was killing off 370 efforts (and the lack of new IBM 370s during the period is credited with giving the clone 370 system makers their market foothold). The claim is that a major motivation for FS was as a complex countermeasure to clone compatible 360/370 I/O controllers ... but it resulted in giving rise to the clone 370 system makers.

late 80s, a senior disk engineer gets a talk scheduled at the annual, world-wide, internal communication group conference, supposedly on 3174 performance ... but opens the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing a drop in disk sales with data fleeing datacenters to more distributed computing friendly platforms. The communication group had a stranglehold on datacenters with their corporate strategic responsibility for everything that crossed datacenter walls and were fiercely fighting off client/server and distributed computing. The disk division had come up with a number of (distributed computing) solutions that were constantly vetoed by the communication group.

The communication group datacenter stranglehold wasn't just disks; a couple years later IBM has one of the largest losses in the history of US companies and was being reorged into the 13 "baby blues" in preparation for breaking up the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

we had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup of the company. Before we get started, the board brings in the former president of AMEX who (somewhat) reverses the breakup (although it wasn't long before the disk division is gone).

... other trivia: before ms/dos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was cp/m
https://en.wikipedia.org/wiki/CP/M
before developing cp/m, kildall worked on IBM cp67/cms at npg
https://en.wikipedia.org/wiki/Naval_Postgraduate_School

... and opel
https://www.pcworld.com/article/243311/former_ibm_ceo_john_opel_dies.html
According to the New York Times, it was Opel who met with Bill Gates, CEO of then-small software firm Microsoft, to discuss the possibility of using Microsoft PC-DOS OS for IBM's about-to-be-released PC. Opel set up the meeting at the request of Gates' mother, Mary Maxwell Gates. The two had both served on the National United Way's executive committee.

... snip ...

The communication group also later performance-kneecapped the PS2 microchannel cards. The AWD workstation division had done their own cards for the PC/RT (AT-bus), including the 4mbit token-ring card. However, for the RS/6000 microchannel, AWD was told they couldn't do their own cards, but had to use the PS2 microchannel cards. An example of the severe performance kneecapping was that the PS2 microchannel 16mbit T/R card had lower card throughput than the PC/RT 4mbit T/R card (joke was that an RS/6000 16mbit T/R server would have lower throughput than a PC/RT 4mbit T/R server).

Note: 4300s sold into the same mid-range market as DEC VAX and in about the same numbers (for small unit orders). The big difference was large corporation orders for hundreds of 4300s at a time for placing out in departmental areas (sort of the leading edge of the coming distributed computing tsunami). Old post with a decade of DEC VAX sales, sliced and diced by model, year, US/non-US. As can be seen from the mid-80s VAX numbers, the mid-range market was starting to shift to workstations and large PC servers (MVI/MVII are microvax workstations).
https://www.garlic.com/~lynn/2002f.html#0

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
communication group datacenter stranglehold posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
AMEX President posts
https://www.garlic.com/~lynn/submisc.html#gerstner

--
virtualization experience starting Jan1968, online at home since Mar1970

Future System and S/38

From: Lynn Wheeler <lynn@garlic.com>
Subject: Future System and S/38
Date: 23 Jun, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024d.html#29 Future System and S/38
https://www.garlic.com/~lynn/2024d.html#30 Future System and S/38

Mainframe emulators (running on PC, linux, windows, mac os x)
http://www.hercules-390.org/
https://en.wikipedia.org/wiki/Hercules_(emulator)
https://bradrigg456.medium.com/run-your-own-mainframe-using-hercules-mainframe-emulator-and-mvs-3-8j-tk4-e8a85ebecd62

Free and copyright "free" IBM software (through the 70s) available.
http://www.ibiblio.org/jmaynard/

Early 1979, copyright law changed & extended copyright life; in late 2011 I scanned my copy of the SHARE LSRAD report (copyright Dec79) and had to find somebody at SHARE to approve putting it up on bitsavers:
http://bitsavers.org/pdf/ibm/share/

some posts mentioning hercules and funsoft mainframe emulators
https://www.garlic.com/~lynn/2023d.html#34 IBM Mainframe Emulation
https://www.garlic.com/~lynn/2021i.html#31 What is the oldest computer that could be used today for real work?
https://www.garlic.com/~lynn/2011c.html#93 Irrational desire to author fundamental interfaces
https://www.garlic.com/~lynn/2010e.html#42 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2009c.html#21 IBM tried to kill VM?
https://www.garlic.com/~lynn/2005k.html#8 virtual 360/67 support in cp67
https://www.garlic.com/~lynn/2004e.html#32 The attack of the killer mainframes

posts mentioning bitsaver and SHARE LSRAD
https://www.garlic.com/~lynn/2024c.html#25 Tymshare & Ann Hardy
https://www.garlic.com/~lynn/2024b.html#90 IBM User Group Share
https://www.garlic.com/~lynn/2023g.html#32 Storage Management
https://www.garlic.com/~lynn/2023e.html#20 Copyright Software
https://www.garlic.com/~lynn/2022.html#122 SHARE LSRAD Report
https://www.garlic.com/~lynn/2015f.html#82 Miniskirts and mainframes
https://www.garlic.com/~lynn/2014j.html#53 Amdahl UTS manual
https://www.garlic.com/~lynn/2013h.html#85 Before the PC: IBM invents virtualisation
https://www.garlic.com/~lynn/2013h.html#82 Vintage IBM Manuals
https://www.garlic.com/~lynn/2013e.html#52 32760?
https://www.garlic.com/~lynn/2012p.html#58 What is holding back cloud adoption?
https://www.garlic.com/~lynn/2012o.html#35 Regarding Time Sharing
https://www.garlic.com/~lynn/2012i.html#40 GNOSIS & KeyKOS
https://www.garlic.com/~lynn/2012i.html#39 Just a quick link to a video by the National Research Council of Canada made in 1971 on computer technology for filmmaking
https://www.garlic.com/~lynn/2012f.html#58 Making the Mainframe more Accessible - What is Your Vision?
https://www.garlic.com/~lynn/2011p.html#146 IBM Manuals
https://www.garlic.com/~lynn/2011p.html#11 1979 SHARE LSRAD Report
https://www.garlic.com/~lynn/2011p.html#10 1979 SHARE LSRAD Report
https://www.garlic.com/~lynn/2011n.html#62 1979 SHARE LSRAD Report
https://www.garlic.com/~lynn/2011.html#88 digitize old hardcopy manuals
https://www.garlic.com/~lynn/2011.html#85 Two terrific writers .. are going to write a book
https://www.garlic.com/~lynn/2010q.html#33 IBM S/360 Green Card high quality scan
https://www.garlic.com/~lynn/2010l.html#13 Old EMAIL Index
https://www.garlic.com/~lynn/2009n.html#0 Wanted: SHARE Volume I proceedings
https://www.garlic.com/~lynn/2009.html#70 A New Role for Old Geeks
https://www.garlic.com/~lynn/2009.html#47 repeat after me: RAID != backup

--
virtualization experience starting Jan1968, online at home since Mar1970

ancient OS history, ARM is sort of channeling the IBM 360

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: ancient OS history, ARM is sort of channeling the IBM 360
Newsgroups: comp.arch
Date: Sun, 23 Jun 2024 17:46:11 -1000
John Levine <johnl@taugh.com> writes:
Not really. VS1 was basically MFT running in a single virtual address space. The early versions of VS2 were SVS, MVT running in a single virtual address space, and then MVS, where each job got its own address space. As Lynn has often explained, OS chewed up so much of the address space that they needed MVS to make enough room for programs to keep doing useful work.

re:
https://www.garlic.com/~lynn/2024d.html#24 ARM is sort of channeling the IBM 360

... also SVS's single 16mbyte virtual address space (sort of like running MVT in a CP67 16mbyte virtual machine) still used the 360 4bit storage protection key to "protect" regions from each other ... so it was capped at 15 concurrent regions ... but systems were getting faster, much faster than disks were ... so needed increasing numbers of concurrently executing regions ... so went to MVS ... gave each region its own virtual address space (to keep them isolated/protected from each other). But MVS was becoming increasingly bloated both in real storage and in the amount it took in each region's virtual address space .... so needed more than 16mbyte real storage as well as more than 16mbyte virtual storage.

trivia: I was pontificating in the 70s about the mismatch between the increase in system throughput (memory & CPU) and the increase in disk throughput. In the early 80s I wrote a tome that the relative system throughput of disk had declined by an order of magnitude since 360 was announced in the 60s (systems increased 40-50 times, disks increased 3-5 times). A disk division executive took exception and assigned the division performance group to refute my claims. After a couple of weeks, they basically came back and said that I had slightly understated the problem.

They then respun the analysis for a (mainframe user group) SHARE presentation for how to configure disks for increased system throughput (16Aug1984, SHARE 63, B874).

more recently there have been some references that cache-miss memory access latency, when measured in count of processor cycles, is comparable to 60s disk access latency, when measured in count of 60s processor cycles (memory is the new disk ... current memory access relative to processor speed is similar to 60s disk access relative to 60s processor speed)

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 23June1969 Unbundling Announcement

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 23June1969 Unbundling Announcement
Date: 24 Jun, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024d.html#25 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024d.html#26 IBM 23June1969 Unbundling Announcement

undergraduate at univ and working full time for the datacenter responsible for os/360 ... I had added 2741&tty ascii terminal support and an editor (with CMS edit-syntax) to HASP ... and on the way to an east coast SHARE had a side trip to Cornell to see the head (Bill Worley?) of the SHARE HASP committee ... flew in from the west coast to La Guardia and got a (DC3?) flt to Ithaca ... from the Marine(?) terminal ... sat in the plane for an hour held up as a thunderstorm went through ... then heavy turbulence the whole time in the air (and periodically throwing up). Got off at the 1st stop (Elmira), got a rental car, found a motel for the night, and drove the rest of the way to Cornell the next morning.

The univ library did get an ONR grant to do an online catalog; part of the money went for a 2321 datacell. The library was also selected for IBM CICS betatest, and supporting CICS was added to my tasks ... 1st bug was CICS wouldn't come up ... turns out there were some undocumented, hard-coded BDAM options and the library had built their BDAM datasets with a different set of options. trivia: some 25yrs later I was brought into NIH's NLM to look at UMLS and a couple people still there had built NLM's (BDAM-based) online catalog 25yrs earlier.

posts mentioning HASP/ASP, JES2/JES, and/or NJE/NJI:
https://www.garlic.com/~lynn/submain.html#hasp
CICS and/or BDAM posts
https://www.garlic.com/~lynn/submain.html#cics

... a few posts mentioning Worley and Cornell/HASP
https://www.garlic.com/~lynn/2015h.html#86 Old HASP
https://www.garlic.com/~lynn/2008.html#51 IBM LCS
https://www.garlic.com/~lynn/2007p.html#21 Newsweek article--baby boomers and computers
https://www.garlic.com/~lynn/2007j.html#79 IBM 360 Model 20 Questions
https://www.garlic.com/~lynn/2006e.html#1 About TLB in lower-level caches
https://www.garlic.com/~lynn/2004f.html#29 [Meta] Marketplace argument

... a few posts mentioning NIH, NLM, UMLS, BDAM
https://www.garlic.com/~lynn/2023d.html#7 Ingenious librarians
https://www.garlic.com/~lynn/2022c.html#39 After IBM
https://www.garlic.com/~lynn/2018c.html#13 Graph database on z/OS?
https://www.garlic.com/~lynn/2018b.html#54 Brain size of human ancestors evolved gradually over 3 million years
https://www.garlic.com/~lynn/2017f.html#34 The head of the Census Bureau just quit, and the consequences are huge
https://www.garlic.com/~lynn/2013g.html#87 Old data storage or data base
https://www.garlic.com/~lynn/2004p.html#0 Relational vs network vs hierarchic databases
https://www.garlic.com/~lynn/2004o.html#67 Relational vs network vs hierarchic databases

--
virtualization experience starting Jan1968, online at home since Mar1970

ancient OS history, ARM is sort of channeling the IBM 360

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: ancient OS history, ARM is sort of channeling the IBM 360
Newsgroups: comp.arch
Date: Tue, 25 Jun 2024 08:33:39 -1000
John Levine <johnl@taugh.com> writes:
My recollection is that if you were using QSAM with multiple buffers and full track records it wasn't hard to keep the disk going at full speed. Later versions of OS do chained scheduling if you have enough buffers, doing several disk operations with one channel program.

https://www.garlic.com/~lynn/2024d.html#24 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#32 ancient OS history, ARM is sort of channeling the IBM 360

When the 360/67 was delivered to the univ, I was hired fulltime responsible for OS/360 (tss/360 never came to production). Initially student fortran jobs ran over a minute (had run under a second on 709 tape->tape). I installed HASP which cut the time in half. I then started redoing OS/360 SYSGEN to carefully place SYSTEM datasets and PDS (program library) members to optimize arm seek and multi-track search (the channel program used to search the PDS directory for member location), cutting another 2/3rds to 12.9secs. Student Fortran never got better than the 709 until I installed Univ. of Waterloo WATFOR.

when CP67 was 1st delivered to the univ (3rd installation after cambridge itself and MIT lincoln labs), all I/O was FIFO and page I/O was a single 4k page at a time. The CMS filesystem used 800 byte blocks and usually did a single block transfer per channel program ... however, if loading a program image that had been allocated contiguous/sequential, it would transfer up to a track's worth in a single channel program.

I redid disk I/O for ordered seek and redid page I/O to maximize page transfers per revolution (at the same arm position). For the 2301 fixed-head (paging) drum I got it from a max of around 70 4k transfers/sec to a peak of 270 4k/sec (max transfers per 2301 revolution).
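
(a minimal sketch of the queue-ordering idea; illustrative only, not the actual CP67 code: sort the pending 4k requests by rotational slot so a single chained channel program can service several of them per revolution instead of FIFO one at a time)

  #include <stdio.h>
  #include <stdlib.h>

  typedef struct { int slot; int page; } req_t;   /* slot = angular position on the drum track */

  static int by_slot(const void *a, const void *b)
  {
      return ((const req_t *)a)->slot - ((const req_t *)b)->slot;
  }

  /* order the pending queue by angular position so the chained channel program
     hits requests in the order the slots rotate past the fixed heads */
  void order_for_revolution(req_t *queue, size_t n)
  {
      qsort(queue, n, sizeof *queue, by_slot);
  }

  int main(void)
  {
      req_t q[] = { {7, 1}, {2, 2}, {5, 3}, {0, 4} };
      order_for_revolution(q, 4);
      for (int i = 0; i < 4; i++)
          printf("slot %d page %d\n", q[i].slot, q[i].page);
      return 0;
  }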

There was a problem with the CMS filesystem in that it pretty much did scatter allocation (CMS sort of shared some CTSS heritage with UNIX, going back through MULTICS) ... so it was a rare file that happened to have any sequentially allocated, contiguous records. Shortly after graduating and joining the science center ... and seeing what multics was doing on the flr above (the science center was on the 4th flr of 545 tech sq, multics was on the 5th flr), I modified the CMS filesystem to use 4k records and a page-mapped API (and the underlying page I/O support would order for maximum transfers per revolution) ... and I also added to CMS program image generation an attempt to maximize contiguous allocation, which could result in close to max transfers/revolution in a single channel program (when loading a program).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
page-mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap

some recent posts mentioning work as undergraduate
https://www.garlic.com/~lynn/2024c.html#94 Virtual Memory Paging
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2023g.html#1 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#102 MPIO, Student Fortran, SYSGENS, CP67, 370 Virtual Memory
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2023f.html#65 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#34 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023f.html#17 Video terminals
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#29 Copyright Software
https://www.garlic.com/~lynn/2023e.html#12 Tymshare
https://www.garlic.com/~lynn/2023e.html#10 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023c.html#46 IBM DASD
https://www.garlic.com/~lynn/2023c.html#26 Global & Local Page Replacement
https://www.garlic.com/~lynn/2023c.html#25 IBM Downfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Null terminated strings, Disconnect Between Coursework And Real-World Computers

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Null terminated strings, Disconnect Between Coursework And Real-World Computers
Newsgroups: alt.folklore.computers
Date: Wed, 26 Jun 2024 11:50:16 -1000
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
Ah, the GE 645, the MULTICS machine. Was the H6180 very similar? Did Honeywell develop any entirely separate OS to take advantage of that kind of hardware? (Not sure if its computer business remained in existence long enough to do so...)

GCC's new fortification level: The gains and costs
https://developers.redhat.com/articles/2022/09/17/gccs-new-fortification-level
C programs routinely suffer from memory management problems. For several years, a _FORTIFY_SOURCE preprocessor macro inserted error detection to address these problems at compile time and run time. To add an extra level of security, _FORTIFY_SOURCE=3 has been in the GNU C Library (glibc) since version 2.34.

... snip ...

There were claims that null-terminated strings (and the missing buffer-length checks that go with them) were behind major Internet/TCPIP exploits and vulnerabilities in the 90s (until the big uptick in automagic execution of visual basic in data files). I pontificated that such overflows were not seen in the IBM mainframe VS/Pascal TCP/IP implementation ... or the MULTICS PL/I implementation
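
(a minimal C illustration of the class of bug being discussed: an unchecked copy into a fixed buffer versus a bounded copy of the kind that length-carrying string implementations like VS/Pascal or PL/I descriptors get for free)

  #include <string.h>
  #include <stdio.h>

  void unsafe(const char *input)
  {
      char buf[16];
      strcpy(buf, input);              /* overflows buf if input is >= 16 bytes */
      printf("%s\n", buf);
  }

  void bounded(const char *input)
  {
      char buf[16];
      /* explicit length check -- descriptor-based languages carry the length with
         the string, so the equivalent check cannot be forgotten */
      strncpy(buf, input, sizeof buf - 1);
      buf[sizeof buf - 1] = '\0';
      printf("%s\n", buf);
  }

  int main(void)
  {
      bounded("a string longer than sixteen bytes");   /* safely truncated */
      return 0;
  }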

Thirty Years Later: Lessons from the Multics Security Evaluation
https://www.acsac.org/2002/papers/classic-multics.pdf
2.2 Security as Standard Product Feature; 2.3 No Buffer Overflows; 2.4 Minimizing Complexity

Multics Security Evaluation: Vulnerability Analysis (1974)
https://csrc.nist.gov/publications/history/karg74.pdf

Turn of the century, I tried to do semantic analysis of CVE reports
https://cve.mitre.org/
and asked MITRE if they could require a little more precision in the reports (at the time Mitre responded that they were lucky to get any information in the reports). Archived posts
https://www.garlic.com/~lynn/2004e.html#43
then few months later, NIST published something similar
https://www.garlic.com/~lynn/2005b.html#20
https://www.garlic.com/~lynn/2005d.html#0

(C-language) buffer overflow posts
https://www.garlic.com/~lynn/subintegrity.html#buffer

--
virtualization experience starting Jan1968, online at home since Mar1970

This New Internet Thing, Chapter 8

From: Lynn Wheeler <lynn@garlic.com>
Subject: This New Internet Thing, Chapter 8
Date: 28 Jun, 2024
Blog: Facebook
This New Internet Thing, Chapter 8
https://albertcory50.substack.com/p/this-new-internet-thing-chapter-8

... opel & ms/dos
https://www.pcworld.com/article/243311/former_ibm_ceo_john_opel_dies.html
According to the New York Times, it was Opel who met with Bill Gates, CEO of then-small software firm Microsoft, to discuss the possibility of using Microsoft PC-DOS OS for IBM's about-to-be-released PC. Opel set up the meeting at the request of Gates' mother, Mary Maxwell Gates. The two had both served on the National United Way's executive committee.

... snip ...

... other trivia: before ms/dos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was cp/m
https://en.wikipedia.org/wiki/CP/M
before developing cp/m, kildall worked on IBM cp/67-cms at npg
https://en.wikipedia.org/wiki/Naval_Postgraduate_School

(virtual machine) CP67 (precursor to vm370)
https://en.wikipedia.org/wiki/CP-67
other (virtual machine) history
https://www.leeandmelindavarian.com/Melinda#VMHist

I had taken a two credit hr intro to fortran/computers and at the end of the semester was hired to rewrite 1401 MPIO in assembler for the 360/30. The univ. was getting a 360/67 to replace the 709/1401 and temporarily the 1401 was replaced with a 360/30 pending arrival of the 360/67. The univ. shut down the datacenter on weekends and I got to have the whole place dedicated (although 48hrs w/o sleep made monday classes hard). I was given a bunch of hardware & software manuals and got to design my own monitor, device drivers, error recovery, storage management, etc ... and within a few weeks had a 2000 card 360 assembler program. Within a year of taking the intro class, the 360/67 arrived and I was hired fulltime responsible for os/360 (tss/360 never came to production fruition, so it ran as a 360/65 w/os360). Student fortran ran less than a second on the 709 (tape->tape) but well over a minute with fortgclg on 360/65 os/360. I install HASP and it cuts the time in half, then I start redoing STAGE2 SYSGEN to carefully place datasets and PDS members to optimize arm seek and multi-track search, cutting another 2/3rds to 12.9secs ... never got better than the 709 until I install WATFOR.

Then cambridge came out to install cp67 (3rd install after cambridge itself and mit lincoln labs) and I mostly got to play with it during my weekend dedicated time. The first few months concentrated on rewriting lots of code to cut CP67 CPU time running OS/360. The test stream ran 322secs on the bare machine and initially 856secs in a virtual machine (534secs CP67 CPU) ... I got CP67 CPU down to 113secs. I then redid disk and (fixed-head) drum I/O for ordered seek arm queuing and chained page requests (optimizing transfers/revolution ... from purely FIFO single page transfers) ... and then dynamic adaptive scheduling/resource-management and new page replacement algorithms.

HASP posts
https://www.garlic.com/~lynn/submain.html#hasp
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

part of 60s SHARE presentation on os/360 & CP/67 work at the univ
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14

recent posts mentioning 709, 1401, MPIO, 360/30, 360/67, os/360, student fortran, watfor, getting CP67 benchmark from 534secs to 113secs
https://www.garlic.com/~lynn/2024c.html#94 Virtual Memory Paging
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024b.html#114 EBCDIC
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2023f.html#102 MPIO, Student Fortran, SYSGENS, CP67, 370 Virtual Memory
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2023f.html#34 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II

--
virtualization experience starting Jan1968, online at home since Mar1970

Chat Rooms and Social Media

From: Lynn Wheeler <lynn@garlic.com>
Subject: Chat Rooms and Social Media
Date: 29 Jun, 2024
Blog: Facebook
One of the GML inventors (in 1969, "law office application") at the science center was originally hired to promote the cambridge wide-area network
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

science center wide-area network morphing into corporate network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s) ... technology also used for the corporate sponsored univ BITNET&EARN
https://en.wikipedia.org/wiki/BITNET
https://en.wikipedia.org/wiki/European_Academic_and_Research_Network
https://earn-history.net/technology/the-network/

Note in aug1976, TYMSHARE
https://en.wikipedia.org/wiki/Tymshare
started offering their CMS-based online computer conferencing "free" to (IBM mainframe user group) SHARE
https://en.wikipedia.org/wiki/SHARE_(computing)
as VMSHARE ... archives here
http://vm.marist.edu/~vmshare
talk given at 1983 vm workshop
http://vm.marist.edu/~vmshare/read.cgi?fn=VMAGENDA&ft=MEMO&line=522
"networking research" on feb1987 agenda for vm workshop
http://vm.marist.edu/~vmshare/read.cgi?fn=VMWKABSA&ft=MEMO&line=347

I cut a deal with TYMSHARE to get monthly tape dumps of all VMSHARE files for putting up on the internal network and systems (the largest problem was with lawyers who were concerned internal employees would be contaminated by exposure to unfiltered customer information).

Late 70s and early 80s I was blamed for online computer conferencing on the internal network, it really taking off spring of 1981 when I distributed a trip report of a visit to Jim Gray at TANDEM ... only about 300 directly participated but claims were that upwards of 25,000 were reading (folklore is that when the corporate executive committee was told, 5of6 wanted to fire me). One of the outcomes was a researcher was paid to sit in the back of my office for nine months taking notes on how I communicated (face-to-face, telephone, etc), also got copies of all my incoming and outgoing email and logs of all instant messages. Results were papers, conference talks, books and a Stanford PhD (joint with language and computer AI). One statistic was that I averaged TO and/or FROM email with some 270+ unique/different people per week for the nine months. Other details:
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

1jan1983, arpanet transitioned from HOST/IMPs (approx. 100 network IMPs and 255 HOSTS) to internetworking protocol ... at the time the internal network was rapidly approaching 1000 nodes ... archived post with world-wide corporate locations getting one or more new nodes during 1983 (one of the complications was the corporate requirement that all links be encrypted, and government resistance, especially when crossing national boundaries):
https://www.garlic.com/~lynn/2006k.html#8

Early 80s, also got the HSDT project (T1 and faster computer links, both terrestrial and satellite) and was working with the NSF director on interconnecting the NSF supercomputer centers ... initially was supposed to get $20M, then congress cuts the budget, some other things happen, and eventually an RFP is released (in part based on what we already had running). Preliminary agenda, 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

IBM internal politics was not allowing us to bid (possibly contributing was being blamed for online computer conferencing). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid, awarded 24Nov87)

co-worker at CSC, responsible for the cambridge wide-area network ... we later both transfer to SJR
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to DARPA, where Hendricks described his innovations to the principal scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75, Cerf and Hendricks were the only two delegates from the United States, to attend a workshop on Data Communications at the International Institute for Applied Systems Analysis, 2361 Laxenburg Austria where again, Hendricks spoke publicly about his innovative design which paved the way to the Internet as we know it today.

... snip ...

SJMerc article about Edson (he passed aug2020) and "IBM'S MISSED OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
csc/vm posts
https://www.garlic.com/~lynn/submisc.html#cscvm
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet and/or earn posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
nsfnet posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

some recent posts mentioning TYMSHARE and VMSHARE
https://www.garlic.com/~lynn/2024c.html#120 Disconnect Between Coursework And Real-World Computers
https://www.garlic.com/~lynn/2024c.html#110 Anyone here (on news.eternal-september.org)?
https://www.garlic.com/~lynn/2024c.html#104 Virtual Memory Paging
https://www.garlic.com/~lynn/2024c.html#103 CP67 & VM370 Source Maintenance
https://www.garlic.com/~lynn/2024c.html#43 TYMSHARE, VMSHARE, ADVENTURE
https://www.garlic.com/~lynn/2024c.html#25 Tymshare & Ann Hardy
https://www.garlic.com/~lynn/2024b.html#90 IBM User Group Share
https://www.garlic.com/~lynn/2024b.html#87 Dialed in - a history of BBSing
https://www.garlic.com/~lynn/2024b.html#81 rusty iron why ``folklore''?
https://www.garlic.com/~lynn/2024b.html#34 Internet
https://www.garlic.com/~lynn/2024.html#109 IBM User Group SHARE
https://www.garlic.com/~lynn/2024.html#47 3330, 3340, 3350, 3380

--
virtualization experience starting Jan1968, online at home since Mar1970

GISH GALLOP

From: Lynn Wheeler <lynn@garlic.com>
Subject: GISH GALLOP
Date: 29 Jun, 2024
Blog: Facebook
I recognized it, but didn't know it had a name
The GISH GALLOP. It's a rhetorical technique in which someone throws out a fast string of lies, non-sequiturs, and specious arguments, so many that it is impossible to fact-check or rebut them in the amount of time it took to say them. Trying to figure out how to respond makes the opponent look confused, because they don't know where to start grappling with the flood that has just hit them. It is a form of gaslighting.

... snip ...

more detail:
https://en.wikipedia.org/wiki/Gish_gallop
https://en.wikipedia.org/wiki/Brandolini%27s_law
https://en.wikipedia.org/wiki/Gaslighting

There's a name for Trump's technique to overwhelm the public with a stream of tiny lies (8Feb2017)
https://qz.com/905252/donald-trumps-lies-are-all-part-of-a-debate-tactic-called-the-gish-gallop
The Gallop works by leveraging two basic tendencies in human reasoning. First, it's easier and faster to make a false claim than it is to disprove one. Second, if an opponent fails to disprove every single one of the spurious statements you state, you can claim victory on the leftovers.

... snip ...

some past posts mentioning Trump's lies, and/or false, racist, fascist, birther statements
https://www.garlic.com/~lynn/2024.html#10 GOP Rep. Says Quiet Part Out Loud About Rejecting Border Deal
https://www.garlic.com/~lynn/2023f.html#63 We can't fight the Republican party's 'big lie' with facts alone
https://www.garlic.com/~lynn/2021h.html#39 Republicans delete webpage celebrating Trump's deal with Taliban
https://www.garlic.com/~lynn/2021h.html#21 A Trump bombshell quietly dropped last week. And it should shock us all
https://www.garlic.com/~lynn/2021g.html#93 A top spreader of coronavirus misinformation says he will delete his posts after 48 hours
https://www.garlic.com/~lynn/2021g.html#83 Trump Pressured DOJ to Declare Election Corrupt and 'Leave the Rest to Me'
https://www.garlic.com/~lynn/2021g.html#58 The Storm Is Upon Us
https://www.garlic.com/~lynn/2021f.html#82 Giuliani's Law License Is Suspended Over Trump Election Lies
https://www.garlic.com/~lynn/2021c.html#43 Just 15% of Americans say they like the way that Donald Trump conducts himself as president
https://www.garlic.com/~lynn/2021.html#44 American Fascism
https://www.garlic.com/~lynn/2019e.html#150 How Trump Lost an Evangelical Stalwart
https://www.garlic.com/~lynn/2019e.html#74 Eric Holder is the Official Missing from Discussions of the Bidens' Ukrainian Efforts
https://www.garlic.com/~lynn/2019e.html#72 CIA's top lawyer made 'criminal referral' on complaint about Trump Ukraine call
https://www.garlic.com/~lynn/2019e.html#59 Acting Intelligence Chief Refuses to Testify, Prompting Standoff With Congress
https://www.garlic.com/~lynn/2019d.html#95 The results of Facebook's anti-conservative bias audit are in
https://www.garlic.com/~lynn/2018d.html#80 Former NSA And CIA Director Michael Hayden: The 'Golden Age Of Electronic Surveillance' Is Ending

--
virtualization experience starting Jan1968, online at home since Mar1970

ancient OS history, ARM is sort of channeling the IBM 360

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: ancient OS history, ARM is sort of channeling the IBM 360
Newsgroups: comp.arch
Date: Sat, 29 Jun 2024 23:39:22 -1000
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
Work out the numbers. The CPU time necessary to copy a single record is most likely a small fraction of the time it takes to service an I/O interrupt.

And this is not taking into account the fact that I/O interrupts run at a higher priority than user-level tasks like copying buffers, anyway.


re:
https://www.garlic.com/~lynn/2024d.html#24 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#32 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#34 ARM is sort of channeling the IBM 360

back to IBM decision to add virtual memory to every 370 ... aka MVT storage management was so bad that regions had to be specified four times larger than used ... as result a normal/typical 1mbyte 370/165 only ran four regions concurrently, insufficient to keep the system busy and justified. adding virtual memory, could run MVT in a 16mbyte virtual address space (aka VS2/SVS, sort of like running MVT in a cp67 16mbyte virtual machine) ... increasing the number of concurrently running regions by a factor of four (up to 15) ... with little or no paging.

however, it created different overhead (in part because the FS failure gave page-mapped filesystems a bad reputation) ... application filesystem channel programs were created (usually) by library routines in application space ... and the channel programs passed to EXCP/SVC0 for execution now would have virtual addresses (rather than the real addresses required by the I/O system) ... this required that EXCP/SVC0 make a copy of every channel program, substituting real addresses for virtual addresses (initially done by crafting CP67's "CCWTRANS" into EXCP/SVC0).
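
a minimal sketch (C, invented names, not the actual EXCP/CCWTRANS code) of the kind of channel program copying described above: walk the caller's CCW list, translate each virtual buffer address to a real address, and build the shadow copy that actually gets started on the channel.

#include <stdint.h>
#include <stddef.h>

/* Simplified S/360-style CCW: command code, data address, chaining flags,
   byte count.  Field layout here is illustrative only. */
typedef struct {
    uint8_t  cmd;
    uint32_t addr;      /* buffer address (24-bit on the real hardware) */
    uint8_t  flags;
    uint16_t count;
} ccw_t;

#define CCW_CHAIN 0x40  /* hypothetical "chaining" flag for this sketch */

/* Hypothetical page-table lookup: virtual address -> real address.
   A real system would also pin the page for the duration of the I/O. */
extern uint32_t virt_to_real(uint32_t vaddr);

/* Copy a caller-built (virtual address) channel program into a shadow
   copy with real addresses -- the kind of work EXCP/SVC0 took on once
   applications ran in virtual address spaces. */
size_t build_shadow_channel_program(const ccw_t *user_cp, ccw_t *shadow,
                                    size_t max_ccws)
{
    size_t i = 0;
    for (;;) {
        if (i >= max_ccws)
            return 0;                        /* too long: reject the request */
        shadow[i] = user_cp[i];              /* copy command, flags, count */
        shadow[i].addr = virt_to_real(user_cp[i].addr);  /* fix up address */
        if (!(user_cp[i].flags & CCW_CHAIN)) /* end of the channel program */
            break;
        i++;
    }
    return i + 1;                            /* number of CCWs copied */
}

a real translator also has to split any data area that crosses a page boundary into multiple data-chained CCWs (or use indirect addressing), since contiguous virtual pages need not be contiguous in real storage.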

370 systems getting larger were then banging against the concurrent region 15 limit imposed by the 4bit storage protection scheme keeping regions separated and had to transition from VS2/SVS single address space to VS2/MVS where every region was isolated in its own separate address space.

However, MVS was increasingly becoming quite bloated (also EXCP/SVC0 still had to make channel program copies) and device redrive (device idle between an interrupt and starting the next queued request) was a few thousand instructions. I had transferred to SJR and got to wander around datacenters in silicon valley including bldg14&15 (disk development and product test) across the street. They were doing prescheduled, 7x24, stand-alone testing and had mentioned they had recently tried MVS, but in that environment, MVS had 15min mean-time-between-failure (besides its significant device idle waiting for device redrive) requiring manual re-ipl/reboot (aka test devices frequently violated all sorts of rules & protocols). I offered to rewrite the I/O supervisor to make it bullet-proof and never fail, allowing any amount of ondemand, concurrent testing ... improving productivity (as well as cutting the path between taking an interrupt and redriving the device to a couple hundred instructions).

post mentioning getting to play disk engineer in bldgs14&15
https://www.garlic.com/~lynn/subtopic.html#disk
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

ancient OS history, ARM is sort of channeling the IBM 360

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: ancient OS history, ARM is sort of channeling the IBM 360
Newsgroups: comp.arch
Date: Sun, 30 Jun 2024 08:58:22 -1000
jgd@cix.co.uk (John Dallman) writes:
What was the problem with the memory management? My experience of systems without virtual memory doesn't include any that shared the machine among several applications, so I have trouble guessing.

re:
https://www.garlic.com/~lynn/2024d.html#24 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#32 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#34 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#39 ARM is sort of channeling the IBM 360

os/360 had "relative" (fixed) adcons ... that were resolved to fixed (real) address at initial program load (and couldn't change for the duration of the program) ... that also presented downside when moving to virtual memory paged environment ... could directly execute paged image from disk ... first, executable image had to be preloaded and all "relative' adcons modified for the specific instance. tss/360 had addressed that by keeping relative adcons, "relative" to base kept in data structure specifically for that instance (same paged shared executable image could appear at different addresses for different programs executing in different address spaces).

MVT memory management for dynamic data allocation had a horrendous problem with storage fragmentation and the frequent requirement for large areas of contiguous storage. The storage fragmentation problem increased the longer programs were running (and maintaining contiguous allocation got harder as the number of different, concurrently running regions increased). After joining IBM, I had done a page-mapped filesystem for CMS and because CMS made extensive use of OS/360 compilers, I was constantly fighting the OS/360 adcon convention (wanting to constantly pre-fix the ADCONs as part of executable loading).

note before I had graduated, I had been hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all data processing into an independent business unit). I thought the Renton datacenter was possibly the largest in the world (a couple hundred million in IBM 360s, sort of precursor to modern cloud megadatacenters), 360/65s arriving faster than they could be installed, boxes constantly being staged in the hallways around the machine room. Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing field for payroll (although they enlarged the room to install a 360/67 for me to play with when I wasn't doing other stuff).

While I was there they moved a two-processor, duplex 360/67 (originally for tss/360) up to Seattle from Boeing Huntsville. Huntsville had got the two-processor machine with lots of 2250 graphic screens
https://en.wikipedia.org/wiki/IBM_2250

for (long running) CAD 2250 applications ... since tss/360 didn't have any CAD support ... they configured it as two single processor systems each running MVT13 ... which was severely affected by the fragmentation problem that increased the longer each CAD 2250 program was running. A few years before the decision was made to add virtual memory to all 370s ... Boeing Huntsville had modified MVT13 to run in virtual memory mode ... it didn't support paging ... but used the virtual memory to create contiguous virtual memory areas out of non-contiguous areas of real storage (to address the MVT storage management problem).
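
a sketch of the trick (C, invented names): even with no paging to disk, relocation hardware can present scattered real frames as one contiguous virtual region, which is all the Boeing Huntsville MVT13 modification needed to sidestep the fragmentation problem.

#include <stdint.h>
#include <stddef.h>

/* Hypothetical per-region page table: index = virtual page number,
   value = real frame number backing that page. */
typedef struct {
    uint32_t *frame;
    size_t    npages;
} region_map_t;

/* Map 'npages' scattered free real frames as one contiguous virtual
   region starting at virtual page 'first_vpage'.  Returns 0 on success.
   No paging to disk is involved: this is purely manufacturing contiguity
   out of non-contiguous real storage. */
int map_contiguous_region(region_map_t *map, size_t first_vpage,
                          const uint32_t *free_frames, size_t npages)
{
    if (first_vpage + npages > map->npages)
        return -1;                        /* region doesn't fit */
    for (size_t i = 0; i < npages; i++)
        map->frame[first_vpage + i] = free_frames[i];
    return 0;
}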

the initial solution adding virtual memory to all 370s (VS2/SVS) was to continue to allow each executing region to specify/reserve a large, contiguous storage area ... but support paging and increase the number of concurrently executing regions.

The original OS/360 design point of running in small real storage contributed to the excessive disk activity ... where lots of the system was fragmented into small pieces that would be sequentially loaded from disk for execution ... and then increasing the number of concurrently executing regions was used to compensate for the large disk filesystem I/O wait time (somewhat analogous to compensating for a poor processor cache hit rate)

cms page-mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap
trying to map os/360 "relocation" adcons into cms page-mapped conventions
https://www.garlic.com/~lynn/submain.html#adcon

some posts mentioning Boeing CFO, Boeing Huntsville, MVT13
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#17 IBM Embraces Virtual Memory -- Finally
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#19 OS/360 Bloat
https://www.garlic.com/~lynn/2023g.html#5 Vintage Future System
https://www.garlic.com/~lynn/2023g.html#4 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#110 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67

--
virtualization experience starting Jan1968, online at home since Mar1970

ancient OS history, ARM is sort of channeling the IBM 360

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: ancient OS history, ARM is sort of channeling the IBM 360
Newsgroups: comp.arch
Date: Sun, 30 Jun 2024 09:09:23 -1000
jgd@cix.co.uk (John Dallman) writes:
Are you sure? Per Wikipedia, the lowest-end real S/360, the Model 30, could run with only card equipment, running BPS, or with only tape drives, under TOS.

<https://en.wikipedia.org/wiki/IBM_System/360_Model_30#System_software>

BOS was a really minimal OS for an 8KB RAM machine with one disc drive, and DOS was less minimal.

The Model 30 was apparently one of the most popular machines in the early days of S/360. Being able to build such small machines was a strong commercial consideration for the company, and thus the architecture.


re:
https://www.garlic.com/~lynn/2024d.html#24 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#32 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#34 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#39 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#40 ARM is sort of channeling the IBM 360

at end of semester after taking a two credit hr intro course, was hired to rewrite 1401 MPIO for (64kbyte) 360/30 ... which was running early os/360 PCP (single executable program at a time) ... had 2311 disks, tapes, and unit record. I first had a 2000 card program, assembled under os/360 but run "stand-alone" ... being loaded with the "BPS" loader (had my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc). Making changes during development & test required bringing up os/360, re-assembling, and then stand-alone loading.

I eventually got around to adding an os/360 mode of operation, using an assembly option to generate either the stand-alone version or the os/360 version. It turns out the stand-alone version took 30mins to assemble, however the OS/360 version took an hour to assemble (OS/360 required a DCB macro for each device and each DCB macro added six minutes elapsed time to assembly) ... aka stand-alone testing, then re-ipl of OS/360 for the 30min re-assemble, still took less time than OS/360 testing with the hour re-assemble.

posts mentioning 1401 mpio 360/30 dcb macros and hour to assemble
https://www.garlic.com/~lynn/2024.html#43 Univ, Boeing Renton and "Spook Base"
https://www.garlic.com/~lynn/2023g.html#66 2540 "Column Binary"
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2023c.html#28 Punch Cards
https://www.garlic.com/~lynn/2023.html#22 IBM Punch Cards
https://www.garlic.com/~lynn/2022d.html#87 Punch Cards
https://www.garlic.com/~lynn/2022d.html#57 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#26 Is this group only about older computers?
https://www.garlic.com/~lynn/2021i.html#61 Virtual Machine Debugging
https://www.garlic.com/~lynn/2021f.html#79 Where Would We Be Without the Paper Punch Card?
https://www.garlic.com/~lynn/2021f.html#19 1401 MPIO
https://www.garlic.com/~lynn/2021e.html#47 Recode 1401 MPIO for 360/30
https://www.garlic.com/~lynn/2020.html#32 IBM TSS
https://www.garlic.com/~lynn/2019e.html#19 All programmers that developed in machine code and Assembly in the 1940s, 1950s and 1960s died?
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles
https://www.garlic.com/~lynn/2018f.html#51 All programmers that developed in machine code and Assembly in the 1940s, 1950s and 1960s died?
https://www.garlic.com/~lynn/2018d.html#104 OS/360 PCP JCL
https://www.garlic.com/~lynn/2018c.html#86 OS/360
https://www.garlic.com/~lynn/2017h.html#49 System/360--detailed engineering description (AFIPS 1964)
https://www.garlic.com/~lynn/2017f.html#36 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/2015b.html#15 What were the complaints of binary code programmers that not accept Assembly?
https://www.garlic.com/~lynn/2013l.html#69 model numbers; was re: World's worst programming environment?
https://www.garlic.com/~lynn/2013h.html#4 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2012l.html#98 PDP-10 system calls, was 1132 printer history
https://www.garlic.com/~lynn/2012i.html#21 IEBPTPCH questions
https://www.garlic.com/~lynn/2012e.html#98 Burroughs B5000, B5500, B6500 videos
https://www.garlic.com/~lynn/2012d.html#7 PCP - memory lane
https://www.garlic.com/~lynn/2011h.html#15 Is the magic and romance killed by Windows (and Linux)?
https://www.garlic.com/~lynn/2010n.html#66 PL/1 as first language
https://www.garlic.com/~lynn/2010f.html#22 history of RPG and other languages, was search engine history
https://www.garlic.com/~lynn/2009h.html#56 Punched Card Combinations
https://www.garlic.com/~lynn/2009h.html#52 IBM 1401
https://www.garlic.com/~lynn/2007n.html#59 IBM System/360 DOS still going strong as Z/VSE
https://www.garlic.com/~lynn/2006g.html#43 Binder REP Cards (Was: What's the linkage editor really wants?)
https://www.garlic.com/~lynn/2002o.html#19 The Hitchhiker's Guide to the Mainframe

--
virtualization experience starting Jan1968, online at home since Mar1970

GISH GALLOP

From: Lynn Wheeler <lynn@garlic.com>
Subject: GISH GALLOP
Date: 30 Jun, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024d.html#38 GISH GALLOP

another version:
Gish Gallop: A debate tactic where one floods their opponent with numerous arguments, regardless of their accuracy or validity. The goal is to overwhelm the opponent, creating the impression of victory.

The key to this strategy is presenting these arguments with confidence, contrasting sharply with the opponent's struggle to debunk them.

When used in propaganda, it's called "firehosing" or the "Firehose of Falsehood," often seen in Russian propaganda.


... snip ...

Firehose of falsehood
https://en.wikipedia.org/wiki/Firehose_of_falsehood

--
virtualization experience starting Jan1968, online at home since Mar1970

Chat Rooms and Social Media

From: Lynn Wheeler <lynn@garlic.com>
Subject: Chat Rooms and Social Media
Date: 30 Jun, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024d.html#37 Chat Rooms and Social Media

Note the cambridge science center 60s cp67 had instant messages between users on the same machine. Then the Pisa science center added SPM to CP67 ... where there was a software program interface for anything that CP67 would otherwise display on the terminal/screen. Ed had done RSCS/VNET that was used for the CP67 science center wide-area network (that morphed into the internal corporate network and was also used for the corporate sponsored univ BITNET) ... and it supported "SPM" and instant message forwarding to users on other machines in the network. SPM was never released to customers ... although the RSCS/VNET shipped to customers contained support for SPM. In the morph of CP67->VM370 ... lots of features were initially dropped and/or simplified ... although SPM internally was migrated to VM370. During the 70s, VMCF, IUCV, and SMSG support were added to VM370 and RSCS/VNET also added support (even tho SPM was a superset combination of VMCF, IUCV and SMSG) which shipped to customers, allowing "instant messaging" between users on BITNET. A number of software "CHAT" applications appeared that utilized this instant messaging capability.

Other trivia, circa 1980, a client/server multi-user spacewar game appeared internally that used SPM between CMS 3270 clients and the spacewar server (where clients could be on different vm370 network nodes ... utilizing the same RSCS/VNET instant messaging support). Then robot clients started appearing, beating all human players (in part because of their faster reaction time) ... and the server was modified to increase power "use" non-linearly as reaction times dropped below nominal human time (to somewhat level the playing field).
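
a tiny sketch (C; the names, formula and 250ms figure are all invented for illustration) of the kind of leveling described: charge energy at the normal rate for command intervals at or above nominal human reaction time, and charge non-linearly more as the interval shrinks below it, so faster-than-human robot clients pay a steep premium.

#define HUMAN_REACTION_MS 250.0   /* assumed nominal human reaction time */

/* Energy-cost multiplier for a command issued 'interval_ms' after the
   previous one: 1.0 at or above human reaction time, rising non-linearly
   (quadratically here, purely illustrative) as the interval shrinks. */
double energy_cost_multiplier(double interval_ms)
{
    if (interval_ms >= HUMAN_REACTION_MS)
        return 1.0;
    double ratio = HUMAN_REACTION_MS / interval_ms;  /* >1 for fast clients */
    return ratio * ratio;   /* e.g. 10x faster than human -> 100x the cost */
}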

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET &/or EARN posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet

recent posts mentioning cambridge cp67 wide-area network:
https://www.garlic.com/~lynn/2024d.html#19 IBM Internal Network
https://www.garlic.com/~lynn/2024d.html#8 TCP/IP Protocol
https://www.garlic.com/~lynn/2024c.html#111 Anyone here (on news.eternal-september.org)?
https://www.garlic.com/~lynn/2024c.html#57 IBM Mainframe, TCP/IP, Token-ring, Ethernet
https://www.garlic.com/~lynn/2024c.html#30 GML and W3C
https://www.garlic.com/~lynn/2024c.html#22 FOILS
https://www.garlic.com/~lynn/2024b.html#109 IBM->SMTP/822 conversion
https://www.garlic.com/~lynn/2024b.html#104 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#86 Vintage BITNET
https://www.garlic.com/~lynn/2024b.html#82 rusty iron why ``folklore''?
https://www.garlic.com/~lynn/2024b.html#33 Internet
https://www.garlic.com/~lynn/2024b.html#32 HA/CMP
https://www.garlic.com/~lynn/2024.html#110 IBM User Group SHARE
https://www.garlic.com/~lynn/2024.html#100 Multicians
https://www.garlic.com/~lynn/2024.html#65 IBM Mainframes and Education Infrastructure
https://www.garlic.com/~lynn/2024.html#31 MIT Area Computing
https://www.garlic.com/~lynn/2024.html#12 THE RISE OF UNIX. THE SEEDS OF ITS FALL
https://www.garlic.com/~lynn/2023f.html#8 Internet
https://www.garlic.com/~lynn/2023f.html#4 GML/SGML separating content and format
https://www.garlic.com/~lynn/2022d.html#72 WAIS. Z39.50

posts mentioning (internal) multi-user spacewar game
https://www.garlic.com/~lynn/2024.html#70 IBM AIX
https://www.garlic.com/~lynn/2023f.html#116 Computer Games
https://www.garlic.com/~lynn/2022e.html#96 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#1 IBM Games
https://www.garlic.com/~lynn/2022c.html#81 Peer-Coupled Shared Data
https://www.garlic.com/~lynn/2014g.html#93 Costs of core
https://www.garlic.com/~lynn/2014.html#1 Application development paradigms [was: RE: Learning Rexx]
https://www.garlic.com/~lynn/2013b.html#77 Spacewar! on S/360
https://www.garlic.com/~lynn/2010d.html#74 Adventure - Or Colossal Cave Adventure

--
virtualization experience starting Jan1968, online at home since Mar1970

Chat Rooms and Social Media

From: Lynn Wheeler <lynn@garlic.com>
Subject: Chat Rooms and Social Media
Date: 01 Jul, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024d.html#37 Chat Rooms and Social Media
https://www.garlic.com/~lynn/2024d.html#43 Chat Rooms and Social Media

blamed for (company) online computer conferencing ... mentioned upthread; one of the other outcomes was official software (supporting both usenet-like servers and mailing list modes) and officially sanctioned, moderated online forums

part of the internet passing the corporate network in nodes in mid/late 80s was 1) the corporate mandate that all links be encrypted and 2) the communication group fiercely fighting off client/server and distributed computing ... including (mostly) keeping network nodes to mainframes with everything else doing terminal emulation. Late 80s, a senior disk engineer got a talk scheduled at an internal, annual, world-wide communication group conference, supposedly on 3174 performance, but opened his talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing a drop in disk sales with data fleeing mainframe datacenters to more distributed-computing-friendly platforms. The disk division had come up with a number of solutions, but they were constantly being vetoed by the communication group (with the communication group's corporate strategic ownership of everything that crossed datacenter walls). The communication group datacenter stranglehold wasn't just disks and a couple years later, IBM had one of the largest losses in the history of US companies ... and was being reorged into the 13 "baby blues" in preparation for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk (corporate hdqtrs) asking if we could help with the breakup of the company. Before we get started, the board brings in the former president of AMEX that (somewhat) reverses the breakup (although it wasn't long before the disk division is gone).

Old email from person in Paris charged with formation of EARN (BITNET in Europe) looking for network applications
https://www.garlic.com/~lynn/2001h.html#email840320
shortly later, the LISTSERV mailing list application appears in Paris.
https://www.lsoft.com/products/listserv-history.asp
https://en.wikipedia.org/wiki/LISTSERV

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
communication group stranglehold on mainframe datacenter posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
internal corporate network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET and/or EARN posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Economic Mess

From: Lynn Wheeler <lynn@garlic.com>
Subject: Economic Mess
Date: 01 Jul, 2024
Blog: Facebook
Jan1999 I was asked to help prevent the coming economic mess (we failed). I was told some investment bankers had walked away "clean" from the S&L Crisis, were then running Internet IPO mills (invest a million, hype, IPO for a couple billion, needed to fail to leave the field clear for the next round) and were predicted to get into securitized loans and mortgages next. I was to improve the integrity of supporting documents for securitized mortgages&loans. Then (from Oct2008 congressional testimony), they found they could pay the rating agencies for triple-A ratings (when the agencies knew they weren't worth triple-A), enabling no-documentation, liar loans/mortgages. Then they found they could design securitized loans/mortgages to fail, pay for triple-A, sell into the bond market and take out CDS "gambling bets". The largest holder of CDS "gambling bets" was AIG, which was negotiating to pay them off at 50cents on the dollar, when the SECTREAS (who had convinced congress to provide $700B in TARP funds, supposedly to buy off-book toxic CDOs from the too-big-to-fail) steps in and has AIG sign a document that they couldn't sue those taking out the CDS "gambling bets" and to take TARP funds to pay off at face value. The largest recipient of TARP funds was AIG and the largest recipient of face-value CDS payoffs was the firm formerly headed by the SECTREAS.

Jan2009, I was asked to HTML'ize the Pecora Hearings (30s congressional hearings into the '29 crash, that resulted in Glass-Steagall; they had been scanned the fall before with comments that the new congress might have an appetite to do something) with lots of URLs between what happened this time and what happened then. I work on it for a few weeks and then get a call saying it won't be needed after all (comments that capitol hill was totally buried under enormous mountains of wallstreet cash). Trivia: the $700B in TARP funds ... although supposedly for buying TBTF off-book toxic CDOs ... would have hardly made a dent; at the end of 2008, just the four largest TBTF were carrying $5.2T in off-book, toxic CDOs. TARP funds went for other stuff and the Federal Reserve handled the "real" TBTF bail-out behind the scenes.

economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess
regulatory "capture" posts
https://www.garlic.com/~lynn/submisc.html#regulatory.capture
too-big-to-fail, too-big-to-prosecute, too-big-to-jail posts
https://www.garlic.com/~lynn/submisc.html#too-big-to-fail
toxic CDO posts
https://www.garlic.com/~lynn/submisc.html#toxic.cdo
federal chairman posts
https://www.garlic.com/~lynn/submisc.html#fed.chairman
ZIRP funds
https://www.garlic.com/~lynn/submisc.html#zirp
glass-steagall and/or pecora hearing posts
https://www.garlic.com/~lynn/submisc.html#Pecora&/orGlass-Steagall
S&L crisis posts
https://www.garlic.com/~lynn/submisc.html#s&l.crisis
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

GISH GALLOP

From: Lynn Wheeler <lynn@garlic.com>
Subject: GISH GALLOP
Date: 02 Jul, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024d.html#38 GISH GALLOP
https://www.garlic.com/~lynn/2024d.html#42 GISH GALLOP

"ends justify the means" ... saying/doing what ever was necessary to achieve objectives ... including stealth take-over of the conservative Republican Party
https://www.ineteconomics.org/perspectives/blog/meet-the-economist-behind-the-one-percents-stealth-takeover-of-america
The tycoon knew that the project was extremely radical, even a "revolution" in governance, but he talked like a conservative to make his plans sound more palatable.
... snip ...

lying more than ever
https://www.washingtonpost.com/politics/2020/01/20/president-trump-made-16241-false-or-misleading-claims-his-first-three-years/
https://www.forbes.com/sites/davidmarkowitz/2020/05/05/trump-is-lying-more-than-ever-just-look-at-the-data/
https://www.washingtonpost.com/politics/2020/07/13/president-trump-has-made-more-than-20000-false-or-misleading-claims/

https://www.amazon.com/Democracy-Chains-History-Radical-Stealth-ebook/dp/B01EH1EL7A/
While some others in the movement called themselves conservatives, he knew exactly how radical his cause was. Informed early on by one of his grantees that the playbook on revolutionary organization had been written by Vladimir Lenin, Koch dutifully cultivated a trusted "cadre" of high-level operatives, just as Lenin had done, to build a movement that refused compromise as it devised savvy maneuvers to alter the political math in its favor.

... snip ...

Take-Over Republican Party
https://www.orlandosentinel.com/politics/os-ne-mac-stipanovich-republican-20191224-tz7bjps56jazbcwb3ficlnacqa-story.html
As for the party, Trump hasn't transformed the party, in my judgment, as much as he has unmasked it. There was always a minority in the Republican party -- 25, 30 percent -- that, how shall we say this, that hailed extreme views, aberrant views. They've always been there, from the John Birchers in the '50s, who thought Dwight Eisenhower was a communist, to the Trump folks today who think John McCain's a traitor. They had different names -- the religious right, tea partiers -- but they've always been there. They were a fairly consistent, fairly manageable minority who we, the establishment, enabled and exploited.

... snip ...

https://www.washingtonpost.com/news/monkey-cage/wp/2017/05/17/trumps-values-are-abhorrent-to-the-federalist-society-of-conservative-lawyers-that-doesnt-stop-them-from-helping-him/
So the Federalist Society is part of an attempt to build an alternative legal elite; one capable of moving conservative and libertarian ideas into the mainstream. And it has worked. As evidenced by Trump's repeated flaunting of his list of potential judges as "Federalist Society approved," the society -- now a vast network of tens of thousands of conservative and libertarian lawyers and judges -- has evolved into the de facto gatekeeper for right-of-center lawyers aspiring to government jobs and federal judgeships under Republican presidents.

... snip ...

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
racism posts
https://www.garlic.com/~lynn/submisc.html#racism

some posts mentioning false, misleading, lying
https://www.garlic.com/~lynn/2023f.html#63 We can't fight the Republican party's 'big lie' with facts alone
https://www.garlic.com/~lynn/2021h.html#21 A Trump bombshell quietly dropped last week. And it should shock us all
https://www.garlic.com/~lynn/2021c.html#43 Just 15% of Americans say they like the way that Donald Trump conducts himself as president
https://www.garlic.com/~lynn/2021.html#44 American Fascism
https://www.garlic.com/~lynn/2021.html#30 Trump and Republican Party Racism
https://www.garlic.com/~lynn/2021.html#24 Trump Tells Georgia Official to Steal Election in Recorded Call
https://www.garlic.com/~lynn/2019e.html#150 How Trump Lost an Evangelical Stalwart
https://www.garlic.com/~lynn/2019d.html#99 Trump claims he's the messiah. Maybe he should quit while he's ahead
https://www.garlic.com/~lynn/2019d.html#95 The results of Facebook's anti-conservative bias audit are in

posts mentioning federalist society and/or heritage foundation
https://www.garlic.com/~lynn/2024c.html#26 The Last Thing This Supreme Court Could Do to Shock Us
https://www.garlic.com/~lynn/2023d.html#99 Right-Wing Think Tank's Climate 'Battle Plan' Wages 'War Against Our Children's Future'
https://www.garlic.com/~lynn/2023d.html#41 The Architect of the Radical Right
https://www.garlic.com/~lynn/2023c.html#51 What is the Federalist Society and What Do They Want From Our Courts?
https://www.garlic.com/~lynn/2022g.html#37 GOP unveils 'Commitment to America'
https://www.garlic.com/~lynn/2022g.html#14 It Didn't Start with Trump: The Decades-Long Saga of How the GOP Went Crazy
https://www.garlic.com/~lynn/2022d.html#4 Alito's Plan to Repeal Roe--and Other 20th Century Civil Rights
https://www.garlic.com/~lynn/2022c.html#118 The Death of Neoliberalism Has Been Greatly Exaggerated
https://www.garlic.com/~lynn/2022.html#107 The Cult of Trump is actually comprised of MANY other Christian cults
https://www.garlic.com/~lynn/2021f.html#63 'A perfect storm': Airmen, F-22s struggle at Eglin nearly three years after Hurricane Michael
https://www.garlic.com/~lynn/2021e.html#88 The Bunker: More Rot in the Ranks
https://www.garlic.com/~lynn/2020.html#6 Onward, Christian fascists
https://www.garlic.com/~lynn/2020.html#5 Book: Kochland : the secret history of Koch Industries and corporate power in America
https://www.garlic.com/~lynn/2020.html#4 Bots Are Destroying Political Discourse As We Know It
https://www.garlic.com/~lynn/2020.html#3 Meet the Economist Behind the One Percent's Stealth Takeover of America
https://www.garlic.com/~lynn/2019e.html#127 The Barr Presidency
https://www.garlic.com/~lynn/2019d.html#97 David Koch Was the Ultimate Climate Change Denier
https://www.garlic.com/~lynn/2019c.html#66 The Forever War Is So Normalized That Opposing It Is "Isolationism"
https://www.garlic.com/~lynn/2019.html#34 The Rise of Leninist Personnel Policies
https://www.garlic.com/~lynn/2012c.html#56 Update on the F35 Debate
https://www.garlic.com/~lynn/2012b.html#75 The Winds of Reform
https://www.garlic.com/~lynn/2012.html#41 The Heritage Foundation, Then and Now

--
virtualization experience starting Jan1968, online at home since Mar1970

E-commerce

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: E-commerce
Date: 02 Jul, 2024
Blog: Facebook
doesn't mention whether "other applications" included direct consumer purchases

Much more Ann Hardy at Computer History Museum
https://www.computerhistory.org/collections/catalog/102717167
Ann rose up to become Vice President of the Integrated Systems Division at Tymshare, from 1976 to 1984, which did online airline reservations, home banking, and other applications. When Tymshare was acquired by McDonnell-Douglas in 1984, Ann's position as a female VP became untenable, and was eased out of the company by being encouraged to spin out Gnosis, a secure, capabilities-based operating system developed at Tymshare. Ann founded Key Logic, with funding from Gene Amdahl, which produced KeyKOS, based on Gnosis, for IBM and Amdahl mainframes. After closing Key Logic, Ann became a consultant, leading to her cofounding Agorics with members of Ted Nelson's Xanadu project.

... snip ...

TYMSHARE
https://en.wikipedia.org/wiki/Tymshare
TYMNET
https://en.wikipedia.org/wiki/Tymnet

trivia: I was brought in to evaluate GNOSIS as part of the 1984 spin-off

other trivia: aug1976, TYMSHARE started offering its (VM370/)CMS-based online computer conferencing free to SHARE
https://en.wikipedia.org/wiki/SHARE_(computing)
as VMSHARE ... archives here
http://vm.marist.edu/~vmshare

I cut a deal with TYMSHARE to get a monthly tape dump of all VMSHARE files for putting up on internal systems and network.

after leaving IBM in early 90s, I was brought in as a consultant to a small client/server company; two former Oracle employees (that I had worked with on cluster scale-up for IBM HA/CMP) were there, responsible for something called "commerce server" doing credit card transactions. The startup had also done this invention called "SSL" they were using; it is now frequently called "electronic commerce". I had responsibility for everything between webservers and the financial payment networks. I then did a talk on "Why The Internet Wasn't Business Critical Dataprocessing" (that Postel sponsored at ISI/USC), based on the reliability, recovery & diagnostic software, procedures, etc, that I did for e-commerce. Payment networks had a requirement that their trouble desks do first-level problem determination within five minutes. Early trials had a major sports store chain doing internet e-commerce ads during week-end national football game half-times and there were problems being able to connect to the payment networks for credit-card transactions ... after three hrs, it was closed as "NTF" (no trouble found).

payment gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
HA/CMP posts
https://www.garlic.com/~lynn/subnetwork.html#hacmp
availability posts
https://www.garlic.com/~lynn/submain.html#available

posts mentioning Ann Hardy and TYMSHARE
https://www.garlic.com/~lynn/2024c.html#25 Tymshare & Ann Hardy
https://www.garlic.com/~lynn/2023e.html#62 IBM Jargon
https://www.garlic.com/~lynn/2023e.html#12 Tymshare
https://www.garlic.com/~lynn/2023e.html#9 Tymshare
https://www.garlic.com/~lynn/2023d.html#37 Online Forums and Information
https://www.garlic.com/~lynn/2023d.html#16 Grace Hopper (& Ann Hardy)
https://www.garlic.com/~lynn/2023c.html#97 Fortran
https://www.garlic.com/~lynn/2023b.html#35 When Computer Coding Was a 'Woman's' Job
https://www.garlic.com/~lynn/2022h.html#60 Fortran
https://www.garlic.com/~lynn/2022g.html#92 TYMSHARE
https://www.garlic.com/~lynn/2021k.html#92 Cobol and Jean Sammet
https://www.garlic.com/~lynn/2021k.html#0 Women in Computing
https://www.garlic.com/~lynn/2021j.html#71 book review: Broad Band: The Untold Story of the Women Who Made the Internet
https://www.garlic.com/~lynn/2021h.html#98 CMSBACK, ADSM, TSM
https://www.garlic.com/~lynn/2019d.html#27 Someone Else's Computer: The Prehistory of Cloud Computing
https://www.garlic.com/~lynn/2008s.html#3 New machine code

posts mentioning "commerce server", former oracle employees, "SSL", Postel sponsoring my talk:
https://www.garlic.com/~lynn/2024c.html#82 Inventing The Internet
https://www.garlic.com/~lynn/2024c.html#62 HTTP over TCP
https://www.garlic.com/~lynn/2024b.html#106 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#73 Vintage IBM, RISC, Internet
https://www.garlic.com/~lynn/2024b.html#70 HSDT, HA/CMP, NSFNET, Internet
https://www.garlic.com/~lynn/2024b.html#33 Internet
https://www.garlic.com/~lynn/2024.html#71 IBM AIX
https://www.garlic.com/~lynn/2024.html#38 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#23 The evolution of Windows authentication
https://www.garlic.com/~lynn/2023e.html#37 IBM 360/67
https://www.garlic.com/~lynn/2023e.html#17 Maneuver Warfare as a Tradition. A Blast from the Past
https://www.garlic.com/~lynn/2023d.html#85 Airline Reservation System
https://www.garlic.com/~lynn/2023d.html#56 How the Net Was Won
https://www.garlic.com/~lynn/2023d.html#46 wallpaper updater
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2023.html#31 IBM Change
https://www.garlic.com/~lynn/2022g.html#26 Why Things Fail
https://www.garlic.com/~lynn/2022e.html#28 IBM "nine-net"
https://www.garlic.com/~lynn/2022c.html#14 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#108 Attackers exploit fundamental flaw in the web's security to steal $2 million in cryptocurrency
https://www.garlic.com/~lynn/2022b.html#68 ARPANET pioneer Jack Haverty says the internet was never finished
https://www.garlic.com/~lynn/2022b.html#38 Security
https://www.garlic.com/~lynn/2022.html#129 Dataprocessing Career
https://www.garlic.com/~lynn/2021k.html#128 The Network Nation
https://www.garlic.com/~lynn/2021j.html#55 ESnet
https://www.garlic.com/~lynn/2021j.html#42 IBM Business School Cases
https://www.garlic.com/~lynn/2021j.html#10 System Availability
https://www.garlic.com/~lynn/2021e.html#74 WEB Security
https://www.garlic.com/~lynn/2021e.html#56 Hacking, Exploits and Vulnerabilities
https://www.garlic.com/~lynn/2021d.html#16 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#68 Online History
https://www.garlic.com/~lynn/2017g.html#14 Mainframe Networking problems
https://www.garlic.com/~lynn/2017f.html#100 Jean Sammet, Co-Designer of a Pioneering Computer Language, Dies at 89
https://www.garlic.com/~lynn/2017e.html#14 The Geniuses that Anticipated the Idea of the Internet
https://www.garlic.com/~lynn/2017e.html#11 The Geniuses that Anticipated the Idea of the Internet
https://www.garlic.com/~lynn/2016e.html#124 Early Networking

--
virtualization experience starting Jan1968, online at home since Mar1970

Architectural implications of locate mode I/O

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Architectural implications of locate mode I/O
Newsgroups: comp.arch
Date: Tue, 02 Jul 2024 17:36:50 -1000
mitchalsup@aol.com (MitchAlsup1) writes:
Once you recognize that I/O is eating up your precious CPU, and you get to the point you are willing to expend another fixed programmed device to make the I/O burden manageable, then you basically have CDC 6600 Peripheral Processors, programmed in code or microcode.

re:
https://www.garlic.com/~lynn/2024d.html#32 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#34 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#39 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#40 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#41 ancient OS history, ARM is sort of channeling the IBM 360

The QSAM library does serialization for the application ... get/put calls do "wait" operations inside the library for I/O completion. The BSAM library has the applications performing serialization with their own "wait" operations for read/write calls (the application handling overlap of possible processing with I/O).

Recent IBM articles mention that the QSAM default multiple buffering was established years ago as "five" ... but current recommendations are more like 150 (for QSAM to keep processing highly overlapped with I/O). Note while they differentiate between application buffers and "system" buffers (for move & locate mode), the QSAM ("system") buffers are part of the application address space but are managed by the QSAM library code.

Both QSAM & BSAM libraries build the (application) channel programs ... and since OS/360 move to virtual memory for all 370s, they all have (application address space) virtual addresses. When the library code passes the channel program to EXCP/SVC0, a copy of the passed channel programs are made, replacing the virtual addresses in the CCWs with "real addresses". QSAM GET can return the address within its buffers (involved in the actual I/O, "locate" mode) or copy data from its buffers to the application buffers ("move" mode). The references on the web all seemed to reference "system" and "application" buffers, but I think it would be more appropriate to reference them as "library" and "application" buffers.

370/158 had "integrated channels" ... the 158 engine ran both 370 instruction set microcode and the integrated channel microcode.

when future system imploded, there was mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 303x&3081 efforts in parallel.

for 303x they created "external channels" by taking a 158 engine with just the integrated channel microcode (and no 370 microcode) for the 303x "channel director".

a 3031 was two 158 engines, one with just the 370 microcode and a 2nd with just the integrated channel microcode

a 3032 was 168-3 remapped to use channel director for external channels.

a 3033 started with 168-3 logic remapped to 20% faster chips.

Jan1979, I had lots of use of an early engineering 4341 and was con'ed into doing a (cdc6600) benchmark for national lab that was looking for 70 4341s for computer farm (sort of leading edge of the coming cluster supercomputing tsunami). Benchmark was fortran compute doing no I/O and executed with nothing else running.

4341: 36.21secs, 3031: 37.03secs, 158: 45.64secs

now, integrated channel microcode overhead ... the 158, even with no I/O running, was still 45.64secs compared to the same hardware in the 3031 but w/o the channel microcode: 37.03secs.

I had a channel efficiency benchmark ... basically how fast the channel can handle each channel command word (CCW) in a channel program (the channel architecture required CCWs be fetched, decoded and executed purely sequentially/synchronously). The test was to have two ("chained") disk read CCWs for two consecutive records. Then add a CCW between the two disk read CCWs (in the channel program) ... which results in a complete extra revolution to read the 2nd data record (because of the latency, while the disk keeps spinning, of handling the extra CCW separating the two record read CCWs).

Then reformat the disk to add a dummy record between each data record, gradually increasing the size of the dummy record until the two data records can be done in single revolution.
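
a back-of-the-envelope sketch of what the benchmark measures (C; the data rate and per-CCW latencies are illustrative assumptions, not numbers from the tests): while the channel spends time fetching/decoding the extra CCW, the disk keeps rotating past the gap, so the dummy record has to cover at least the data that passes under the head during that CCW-handling latency.

#include <stdio.h>

int main(void)
{
    /* Illustrative assumptions only (roughly 3330-class track data rate). */
    double transfer_rate = 806000.0;                  /* bytes per second */
    double ccw_latency_us[] = { 50.0, 150.0, 400.0 }; /* assumed per-CCW times */

    /* Minimum dummy-record size so the channel finishes the intervening
       CCW before the second data record arrives under the head. */
    for (int i = 0; i < 3; i++) {
        double bytes = transfer_rate * (ccw_latency_us[i] / 1e6);
        printf("CCW handling %6.1f us -> dummy record >= %4.0f bytes\n",
               ccw_latency_us[i], bytes);
    }
    return 0;
}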

The size of the dummy record required for single revolution reading the two records was the largest for 158 integrated channel as well as all the 303x channel directors. The original 168 external channels could do single revolution with the smallest possible dummy record (but a 168 with channel director, aka 3032, couldn't, nor could 3033) ... also the 4341 integrated channel microcode could do it with smallest possible dummy record.

The 3081 channel command word (CCW) processing latency was more like 158 integrated channel (and 303x channel directors)

Second half of the 80s, I was a member of Chesson's XTP TAB ... a comparison found that typical UNIX TCP/IP at the time took on the order of 5k instructions and five buffer copies ... while the comparable mainframe protocol in VTAM took 160k instructions and 15 buffer copies (with larger buffers on high-end cache machines ... the cache misses for the 15 buffer copies could exceed the processor cycles for the 160k instructions).

XTP was working on no buffer copies and streaming I/O ... attempting to process TCP as close as possible to no-buffer-copy disk I/O. Scatter/gather I/O for separate header and data ... also a move from a header CRC protocol to a trailer CRC protocol ... instead of software prescanning the buffer to calculate the CRC (for placing in the header), outboard processing handles the data as it streams through, doing the CRC and then appending it to the end of the record.
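
a small sketch (C, invented names; not the XTP specification) of the header-vs-trailer CRC point: with a header CRC the sender has to pre-scan the whole buffer before the first byte goes out, while with a trailer CRC the checksum can be accumulated as the data streams through and simply appended at the end.

#include <stdint.h>
#include <stddef.h>

/* Toy byte-at-a-time CRC-32 update (illustrative polynomial, not XTP's). */
static uint32_t crc32_update(uint32_t crc, uint8_t byte)
{
    crc ^= byte;
    for (int i = 0; i < 8; i++)
        crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)(-(int32_t)(crc & 1)));
    return crc;
}

extern void put_byte_on_wire(uint8_t b);   /* hypothetical output routine */

/* Trailer-CRC style: transmit the data as it streams through, updating the
   CRC in passing, then append the CRC after the last data byte.  No
   pre-scan of the buffer is needed before transmission starts. */
void send_with_trailer_crc(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        put_byte_on_wire(data[i]);          /* stream the data out */
        crc = crc32_update(crc, data[i]);   /* accumulate CRC on the fly */
    }
    crc ^= 0xFFFFFFFFu;
    for (int i = 0; i < 4; i++)             /* append CRC as a trailer */
        put_byte_on_wire((uint8_t)(crc >> (8 * i)));
}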

When doing IBM's HA/CMP and working with major RDBMS vendors on cluster scaleup in late 80s/early 90s, there were lots of references to POSIX light-weight threads and asynchronous I/O for RDBMS (with no buffer copies) and the RDBMS managing a large record cache.

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
posts mentioning getting to play disk engineer in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk
XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
virtualization experience starting Jan1968, online at home since Mar1970

REXX and DUMPRX

From: Lynn Wheeler <lynn@garlic.com>
Subject: REXX and DUMPRX
Date: 02 Jul, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024d.html#16 REXX and DUMPRX

quick web search for when rexx added to mvs

rexx history announce 1987 for tso & mvs
https://www.rexxla.org/rexxlang/history/mfc/rexxhist.html

apr1988
https://speleotrove.com/rexxhist/rexxtso.html

above mentions SAA ... there was a joke about the communication group trying to get lots of IBM/PC applications running on (or at least tied to) IBM mainframes (part of fighting off client/server and distributed computing, trying to preserve their dumb terminal paradigm).

posts mentioning communication group fighting off client/server and distributed computing trying to preserve their dumb terminal paradigm
https://www.garlic.com/~lynn/subnetwork.html#terminal
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Architectural implications of locate mode I/O

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Architectural implications of locate mode I/O
Newsgroups: comp.arch
Date: Wed, 03 Jul 2024 07:42:42 -1000
re:
https://www.garlic.com/~lynn/2024d.html#24 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#32 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#34 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#39 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#40 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#41 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#48 Architectural implications of locate mode I/O

little "dependable" I/O drift

1980, IBM STL (since renamed SVL) was bursting at the seams and they were moving 300 people (and their 3270 terminals) from the IMS (DBMS) group to an offsite bldg with dataprocessing service back to the STL datacenter. They had tried "remote 3270", but found the human factors unacceptable. I get con'ed into implementing channel extender support (A220, A710/A715/A720, A510/A515) ... allowing channel-attached 3270 controllers to be located at the offsite bldg, connected to mainframes back in STL ... with no perceived difference in human factors (quarter second or better trivial response).
https://en.wikipedia.org/wiki/Network_Systems_Corporation
https://en.wikipedia.org/wiki/HYPERchannel

STL had spread the 3270 controller boxes across all the channels with the 3830 disk controller boxes. Turns out the A220 mainframe channel-attach boxes (used for channel extender) had significantly lower channel busy (for the same amount of 3270 terminal traffic) than the 3270 channel-attach controllers, and as a result the throughput for the IMS group 168s (with NSC A220s) increased by 10-15% ... and STL considered using the NSC HYPERChannel A220 channel-extender configuration for all 3270 controllers (even those within STL). NSC tried to get IBM to release my support, but a group in POK playing with some fiber stuff got it vetoed (concerned that if it was in the market, it would make it harder to release their stuff).

trivia: The vendor eventually duplicated my support and then the 3090 Product Administrator tracked me down. He said that 3090 channels were designed to have an aggregate total of 3-5 channel errors (EREP reported) across all systems & customers over a year period and there were instead 20 (the extras turned out to be channel-extender support). When I got an unrecoverable telco transmission error, I would reflect a CSW "channel check" to the host software. I did some research and found that if an IFCC (interface control check) was reflected instead, it basically resulted in the same system recovery activity (and got the vendor to change their software from "CC" to "IFCC").

I was asked to give a talk at NASA dependable computing workshop and used the 3090 example as part of the talk
https://web.archive.org/web/20011004023230/http://www.hdcc.cs.cmu.edu/may01/index.html

About the same time, the IBM communication group was fighting off the release of mainframe TCP/IP ... and when that got reversed, they changed their tactic and claimed that since they had corporate ownership of everything that crossed datacenter walls, TCP/IP had to be released through them; what shipped got 44kbytes/sec aggregate using nearly a whole 3090 processor. I then did RFC1044 support and in some tuning tests at Cray Research between a Cray and an IBM 4341, got sustained 4341 channel throughput using only a modest amount of 4341 CPU (something like a 500 times improvement in bytes moved per instruction executed).

other trivia: 1988, the IBM branch office asks me if I could help LLNL (national lab) "standardize" some fiber stuff they were playing with, which quickly becomes FCS (fibre-channel standard, including some stuff I had done in 1980), initially 1gbit/sec, full-duplex, 200mbyte/sec aggregate. Then the POK "fiber" group gets their stuff released in the 90s with ES/9000 as ESCON, when it was already obsolete, 17mbytes/sec. Then some POK engineers get involved with FCS and define a heavy-weight protocol that drastically cuts the native throughput, which eventually ships as FICON. The most recent public benchmark I've found is z196 "Peak I/O" getting 2M IOPS using 104 FICON (over 104 FCS). About the same time an FCS was announced for E5-2600 server blades claiming over a million IOPS (two such FCS having higher throughput than 104 FICON). Note also, IBM documents keeping SAPs (system assist processors that do I/O) to 70% CPU (which would be more like 1.5M IOPS).

after leaving IBM in the early 90s, I was brought in as a consultant to a small client/server startup; two former Oracle employees (that I had worked with on cluster scale-up for IBM HA/CMP) were there, responsible for something called "commerce server" doing credit card transactions. The startup had also done this invention called "SSL" they were using; it is now frequently called "electronic commerce". I had responsibility for everything between webservers and the financial payment networks. I then did a talk on "Why The Internet Wasn't Business Critical Dataprocessing" (that Postel sponsored at ISI/USC), based on the reliability, recovery & diagnostic software, procedures, etc I did for e-commerce. Payment networks had a requirement that their trouble desks do first level problem determination within five minutes. Early trials had a major sports store chain doing internet e-commerce ads during week-end national football game half-times and there were problems being able to connect to the payment networks for credit-card transactions ... after three hrs, it was closed as "NTF" (no trouble found).

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
FICON and/or FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon
electronic commerce gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gaeway
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
available posts
https://www.garlic.com/~lynn/submain.html#available

--
virtualization experience starting Jan1968, online at home since Mar1970

Email Archive

From: Lynn Wheeler <lynn@garlic.com>
Subject: Email Archive
Date: 03 Jul, 2024
Blog: Facebook
I have all email back to mid-1977. Most email (software source & documents) from the 60s up to mid-1977 was on triple redundant tapes in the IBM Research tape library ... until the mid-80s when IBM Almaden Research had an operational problem with random tapes being mounted as scratch and I lost a dozen tapes (including the triple-redundant archive tapes).

just prior to the Almaden tape library fiasco ... I got a request from Melinda (for her historical work)
https://www.leeandmelindavarian.com/Melinda#VMHist

for the (early 70s) original multi-level source maintenance implementation ... and I was able to pull it off the archive tapes and email it to her (before they got overwritten). Old email (in archived posts) has a reference to source maintenance from summer 1970:
https://www.garlic.com/~lynn/2006w.html#email850906
https://www.garlic.com/~lynn/2006w.html#email850908

some old archived posts about multi-level source maint.
https://www.garlic.com/~lynn/2023e.html#28 Copyright Software
https://www.garlic.com/~lynn/2021k.html#51 VM/SP crashing all over the place
https://www.garlic.com/~lynn/2014e.html#35 System/360 celebration set for ten cities; 1964 pricing for oneweek
https://www.garlic.com/~lynn/2011c.html#3 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2006w.html#48 vmshare
https://www.garlic.com/~lynn/2006w.html#42 vmshare


internet trivia: the person that did DNS had, a decade+ earlier as an MIT co-op student at the science center, worked on the multi-level source maintenance; a few old posts
https://www.garlic.com/~lynn/2024b.html#74 Internet DNS Trivia
https://www.garlic.com/~lynn/2022g.html#45 Some BITNET (& other) History
https://www.garlic.com/~lynn/2019c.html#90 DNS & other trivia
https://www.garlic.com/~lynn/2007k.html#33 Even worse than UNIX

--
virtualization experience starting Jan1968, online at home since Mar1970

Cray

From: Lynn Wheeler <lynn@garlic.com>
Subject: Cray
Date: 04 Jul, 2024
Blog: Facebook
mid-80s, the communication group was fiercely fighting off client/server and distributed computing and trying to block mainframe TCP/IP support; when that was overturned they changed tactics and said that since they had corporate responsibility for everything that crossed datacenter walls, it had to be released through them; what shipped got an aggregate of 44kbytes/sec using nearly a whole 3090 processor. I then did support for RFC1044 and in some tuning tests at Cray Research between a Cray and an IBM 4341, got sustained 4341 channel throughput using only a modest amount of 4341 CPU (something like a 500 times improvement in bytes moved per instruction executed).

The last product we did at IBM was HA/6000, originally for NYTimes to port their newspaper system (ATEX) from DEC VAXCluster to RS/6000. I rename it HA/CMP when I start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, and Ingres) that had VAXCluster support in the same source base with Unix (I do a distributed lock manager with VAXCluster API semantics and lots of scale-up, to ease the port). AWD/Hester was saying we would have 16-system clusters by mid-92 and 128-system clusters by ye-92. Lots of work with LLNL (besides fiber channel standard/FCS) including having their Cray LINCS/UNITREE ported to HA/CMP. Then towards the end of Jan92, cluster scale-up was transferred for announce as IBM Supercomputer (for technical/scientific only) and we were told we couldn't work on anything with more than four processors (we leave IBM a few months later). We assumed a contributing factor was the mainframe DB2 group complaining that if we were allowed to proceed, it would be far ahead of them.

17feb1992, IBM establishes laboratory to develop parallel systems (pg8)
https://archive.org/details/sim_computerworld_1992-02-17_26_7

Administration pushing national labs to commercialize technology as part of making US more competitive, LANL supercomputer work commercialized as DATATREE (by General Atomics), LLNL commercializes LINCS as UNITREE and NCAR commercializes their supercomputer work as "Mesa Archival" ... HA/CMP was involved in working with all three.

Late 90s, did some consulting work for Steve Chen (responsible for Y-MP) who was then CTO at Sequent (before IBM bought Sequent and shut it down).

hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
FCS and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

some posts mentioning LLNL LINCS/UNITREE
https://www.garlic.com/~lynn/2023g.html#25 Vintage Cray
https://www.garlic.com/~lynn/2022h.html#84 CDC, Cray, Supercomputers
https://www.garlic.com/~lynn/2022b.html#67 David Boggs, Co-Inventor of Ethernet, Dies at 71
https://www.garlic.com/~lynn/2021h.html#93 CMSBACK, ADSM, TSM
https://www.garlic.com/~lynn/2021c.html#52 IBM CEO
https://www.garlic.com/~lynn/2019b.html#60 S/360
https://www.garlic.com/~lynn/2019b.html#57 HA/CMP, HA/6000, Harrier/9333, STK Iceberg & Adstar Seastar
https://www.garlic.com/~lynn/2012i.html#54 IBM, Lawrence Livermore aim to meld supercomputing, industries
https://www.garlic.com/~lynn/2011n.html#34 Last Word on Dennis Ritchie
https://www.garlic.com/~lynn/2008l.html#20 IBM-MAIN longevity
https://www.garlic.com/~lynn/2006u.html#27 Why so little parallelism?
https://www.garlic.com/~lynn/2005e.html#16 Device and channel
https://www.garlic.com/~lynn/2005e.html#15 Device and channel
https://www.garlic.com/~lynn/2002k.html#31 general networking is: DEC eNet: was Vnet : Unbelievable

recent posts mentioning Thornton, Cray, NSC, HYPERchannel
https://www.garlic.com/~lynn/2024d.html#50 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#27 STL Channel Extender
https://www.garlic.com/~lynn/2024b.html#10 Some NSFNET, Internet, and other networking background
https://www.garlic.com/~lynn/2024b.html#8 IBM User Group SHARE
https://www.garlic.com/~lynn/2024.html#9 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2023e.html#109 DataTree, UniTree, Mesa Archival
https://www.garlic.com/~lynn/2023e.html#107 DataTree, UniTree, Mesa Archival
https://www.garlic.com/~lynn/2023e.html#106 DataTree, UniTree, Mesa Archival
https://www.garlic.com/~lynn/2022h.html#84 CDC, Cray, Supercomputers
https://www.garlic.com/~lynn/2022f.html#93 NCAR Fileserver
https://www.garlic.com/~lynn/2022b.html#98 CDC6000
https://www.garlic.com/~lynn/2022b.html#69 ARPANET pioneer Jack Haverty says the internet was never finished
https://www.garlic.com/~lynn/2021k.html#109 Network Systems
https://www.garlic.com/~lynn/2021j.html#32 IBM Downturn
https://www.garlic.com/~lynn/2021h.html#93 CMSBACK, ADSM, TSM

--
virtualization experience starting Jan1968, online at home since Mar1970

16June1911, IBM Incorporation Day

From: Lynn Wheeler <lynn@garlic.com>
Subject: 16June1911, IBM Incorporation Day
Date: 04 Jul, 2024
Blog: Facebook
16June1911, IBM Incorporation Day

CEO Learson tried (and failed) to block bureaucrats, careerists, MBAs from destroying Watson culture & legacy.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

FS (failing) significantly accelerated the rise of the bureaucrats, careerists, and MBAs .... From Ferguson & Morris, "Computer Wars: The Post-IBM World", Time Books
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394

... and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrong headedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive

... snip ...

note: since FS was going to replace 360/370, internal politics was killing off 370 efforts and the lack of new IBM 370s during the period is credited with giving the clone 370 system makers their market foothold. The claim is that a major motivation for FS was as a complex countermeasure to clone-compatible 360/370 I/O controllers ... but it resulted instead in giving rise to the clone 370 system makers. Less than two decades later, IBM has one of the largest losses in the history of US companies and was being reorged into the 13 "baby blues" in preparation for breaking up the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

we had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup of the company. Before we get started, the board brings in the former president of AMEX who (somewhat) reverses the breakup (although it wasn't long before the disk division is gone).

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Architectural implications of locate mode I/O

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Architectural implications of locate mode I/O
Newsgroups: comp.arch
Date: Fri, 05 Jul 2024 15:35:50 -1000
John Levine <johnl@taugh.com> writes:
By putting most of the logic into the printer controller, the 1403 was not just faster, but only took a small fraction of the CPU so the whole system could do more work to keep the printer printing.

re:
https://www.garlic.com/~lynn/2024d.html#24 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#32 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#34 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#39 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#40 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#41 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#48 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#50 Architectural implications of locate mode I/O

360 "CKD DASD" and multi-track search trade-off. 360s had relatively little real storage (for caching information) and slow processor, so for program libraries on disk ... they created "PDS" format and had (disk resident, cylinder aligned) directory that contained records for name of each program and its disk location in the library. To load a program, it first did a "multi-track" search of of the PDS directory started at track 0 of the 1st cylinder of the directory ... ran until it found name match (or reached end of cylinder). If name wasn't found at end of cylinder, it would restart if there were additional cylinders in the directory. Trivia: the searched-for program name was in processor memory and the multi-track search operation would refetch the name every time it did a compare for matching name (with records in the PDS directory), monopolizing channel, controller, & disk.

Roll forward to 1979, a large national grocery chain had a large loosely-coupled complex of multiple 370/168 systems sharing a string of DASD containing the PDS dataset of store controller applications ... and was experiencing enormous throughput problems. All the usual corporate performance specialists had been dragged through the datacenter with hopes that they could address the problem ... until they eventually got around to calling me. I was brought into a large classroom with tables covered with large stacks of activity/performance reports for each system. After 30-40 mins examining the reports ... I began to realize the aggregate activity (summed across all systems) for a specific shared disk was peaking at 6-7 (total) I/O operations per second ... corresponding with the severe performance problem. I asked what was on that disk and was told it was the (shared) store controller program library for all the stores in all regions and 168 systems; I then strongly suspected it was the PDS multi-track search performance that I had grappled with as an undergraduate in the 60s.

The store controller PDS dataset was quite large and had a three cylinder directory, resident on a 3330 disk drive ... implying that on avg, a search required 1.5 cylinders (and two I/Os); the first multi-track search I/O of the full first cylinder (19 tracks) would be 19/60=.317sec (during which time that processor's channel was busy, and the shared controller was also busy ... blocking access to all disks on that string, not just the specific drive, for all systems in the complex) and the 2nd would be 9.5/60=.158sec ... or .475sec for the two ... plus a seek to move the disk arm to the PDS directory, another seek to move the disk arm to the cylinder where the program was located ... approx. .5+secs total for each store controller program library load (involving 6-7 I/Os), or two program loads per second aggregate serving all stores in the country.
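
A minimal sketch of that arithmetic (3330 spinning at 3600 RPM = 60 revolutions/sec, 19 tracks per cylinder, three-cylinder directory; the small seek allowance is my own assumption, just to round the figure out to the ~.5sec per load above):

  #include <stdio.h>

  int main(void)
  {
      const double revs_per_sec   = 60.0;  /* 3330 spins at 3600 RPM   */
      const double tracks_per_cyl = 19.0;  /* 3330 tracks per cylinder */

      /* average search over a 3-cylinder directory: the full first
         cylinder, then (on average) half of the next one */
      double first_io  = tracks_per_cyl / revs_per_sec;         /* ~.317 sec */
      double second_io = (tracks_per_cyl / 2.0) / revs_per_sec; /* ~.158 sec */
      double seeks     = 0.05;   /* rough allowance for the two arm
                                    moves (assumption, not from the post) */

      double per_load = first_io + second_io + seeks;
      printf("directory search : %.3f sec\n", first_io + second_io);
      printf("per program load : ~%.3f sec => ~%.1f loads/sec\n",
             per_load, 1.0 / per_load);
      return 0;
  }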

The store controller PDS program library was then split across set of three disks, one dedicated (non-shared) set for each system in the complex.

I was also doing some work on System/R (original sql/relational RDBMS) and taking some flak from the IMS DBMS group down the road. The IMS group were complaining that RDBMS took twice the disk space (for the RDBMS index) and increased the number of disk I/Os by 4-5 times (for processing the RDBMS index). The counter was that the RDBMS index significantly reduced the manual maintenance (compared to IMS). By the early 80s, disk price/bit was plummeting and system real memory had significantly increased (usable for RDBMS caching, reducing physical I/Os), while manual maintenance skill costs were significantly increasing.

other trivia: when I transfer to San Jose, I got to wander around datacenters in silicon valley, including disk engineering & product test (bldg14&15) across the street. They were doing prescheduled, 7x24, stand-alone mainframe testing. They mentioned they had recently tried MVS, but it had a 15min mean-time-between-failure, requiring manual re-ipl/reboot in that environment. I offered to rewrite the I/O supervisor to make it bullet-proof and never fail, enabling any amount of on-demand, concurrent testing (greatly improving productivity). Downside was they would point their finger at me whenever they had a problem and I was spending increasing amounts of time diagnosing their hardware problems.

1980 was a real tipping point as the hardware tradeoff switched from system bottleneck to I/O bottleneck (my claim was that relative system disk throughput had declined by an order of magnitude; systems got 40-50 times faster, disks got 3-5 times faster).

getting to play disk engineer in bldgs14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk
DASD, CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd
original SQL/relational RDBMS, System/R
https://www.garlic.com/~lynn/submain.html#systemr

some posts mentioning national grocery, PDS multi-track search
https://www.garlic.com/~lynn/2023g.html#60 PDS Directory Multi-track Search
https://www.garlic.com/~lynn/2023e.html#7 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023c.html#96 Fortran
https://www.garlic.com/~lynn/2022f.html#85 IBM CKD DASD
https://www.garlic.com/~lynn/2019b.html#15 Tandem Memo

--
virtualization experience starting Jan1968, online at home since Mar1970

Architectural implications of locate mode I/O

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Architectural implications of locate mode I/O
Newsgroups: comp.arch
Date: Sat, 06 Jul 2024 07:34:43 -1000
"Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:
As you posted below, the whole PDS search stuff could easily be a disaster. Even with more modest sized PDSs, it was inefficient as hell. Doing a linear search, and worse yet, doing it on a device that was slower than main memory, and tying up the disk controller and channel to do it. It wasn't even sort of addressed until the early 1990s with the "fast PDS search" feature in the 3990 controller. The searches still took the same amount of elapsed time, but the key field comparison was done in the controller and it only returned status when it found a match (or end of the extent), which freed up the channel. Things would have been much better if they simply used some sort of "table of contents" or index at the start of the PDS, read it in, then did an in memory search. Even on small memory machines, if you had a small sized index block and used something like a B-tree of them, it would have been faster.

re:
https://www.garlic.com/~lynn/2024d.html#24 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#32 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#34 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#39 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#40 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#41 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#48 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#50 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#54 Architectural implications of locate mode I/O

trivia: I've also mentioned using HYPERChannel in 1980 to implement channel extender ... as a side-effect it also reduced channel busy on the "real" channels ... another side-effect was I would get calls from IBM branches that had customers also doing stuff with HYPERChannel, including NCAR that did supercomputer "network access storage" (NAS) that as a side-effect eliminated channel busy for CKD DASD "search" operations in the 1st half of the 80s (a decade before 3990)

For the A510 channel emulator, the channel program was downloaded into the A510 and executed from there. NCAR got an upgrade to the A515 which also allowed the search argument to be included in the download ... so mainframe real memory and channels weren't involved (although the dasd controller was still involved). It also supported 3rd party transfer.

The supercomputer would send a request over HYPERChannel to the mainframe server. The mainframe would download the channel program into an A515 and return the A515 and channel program "handle" to the supercomputer. The supercomputer would send a request to that A515 to execute the specified channel program (and data would transfer directly between the disk and the supercomputer w/o passing through the mainframe).
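
A minimal sketch of that three-party sequence (the type and function names here are hypothetical, purely to make the flow concrete; the real A515 and mainframe server interfaces aren't shown):

  /* hypothetical sketch of the NCAR-style 3rd-party transfer */
  #include <stdio.h>

  typedef struct { int adapter_id; int program_id; } a515_handle_t;

  /* steps 1&2: supercomputer asks the mainframe server for a dataset;
     the mainframe builds the channel program (search argument included),
     downloads it into the A515, and returns a handle */
  static a515_handle_t mainframe_stage_request(const char *dataset)
  {
      a515_handle_t h = { 515, 42 };
      printf("mainframe: channel program for %s staged in A515\n", dataset);
      return h;
  }

  /* step 3: supercomputer tells that A515 to execute the handle; the data
     then moves directly disk <-> supercomputer, never through mainframe
     memory or channels */
  static void supercomputer_execute(a515_handle_t h)
  {
      printf("supercomputer: adapter %d running program %d, data comes "
             "directly from disk\n", h.adapter_id, h.program_id);
  }

  int main(void)
  {
      a515_handle_t h = mainframe_stage_request("NCAR.ARCHIVE.DATA");
      supercomputer_execute(h);
      return 0;
  }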

Then I became involved with HIPPI (open Cray channel standard pushed by LANL) and FCS (open fibre channel pushed by LLNL), also being able to do 3rd party transfers ... along with having LLNL's LINCS/Unitree and NCAR's "Mesa Archival" ported to the IBM HA/CMP product we were doing.

other trivia: as also mentioned System/R (original SQL/relational RDBMS implementation) used cacheable indexes ... not linear searches.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
posts mentioning DASD, CKD, FBA, multi-track searches
https://www.garlic.com/~lynn/submain.html#dasd
FCS and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
system/r posts
https://www.garlic.com/~lynn/submain.html#systemr

--
virtualization experience starting Jan1968, online at home since Mar1970

Free and Open Source Software-and Other Market Failures

From: Lynn Wheeler <lynn@garlic.com>
Subject: Free and Open Source Software-and Other Market Failures
Date: 06 Jul, 2024
Blog: Facebook
Free and Open Source Software-and Other Market Failures
https://cacm.acm.org/practice/free-and-open-source-software-and-other-market-failures/
IBM, the unimaginably huge monopoly, was so big that "nobody got fired for buying IBM," and it was IBM's way or no way. Then there was everybody else, sometimes referred to as "the seven dwarfs," and they all wanted to be IBM instead of IBM. None of them realized that when customers asked for "anything but IBM," it was not about the letters on the nameplate but about the abuse of power.

... snip ...

CEO Learson tried (& failed) to block the bureaucrats, careerists & MBAs from destroying Watson legacy and culture
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

Seymour Cray and the Dawn of Supercomputing

From: Lynn Wheeler <lynn@garlic.com>
Subject: Seymour Cray and the Dawn of Supercomputing
Date: 06 Jul, 2024
Blog: Facebook
Seymour Cray and the Dawn of Supercomputing
https://www.allaboutcircuits.com/news/seymour-cray-dawn-of-supercomputing/
Seymour Cray once dreamed of building the fastest computer in the world. In a 30-year span, the "father of supercomputing" accomplished that goal several times over.

... snip ...

HA/CMP posts ... some mentioning cluster scale-up
https://www.garlic.com/~lynn/subtopic.html#hacmp

then there is thornton ... some posts mentioning thornton, cray and cdc6000:
https://www.garlic.com/~lynn/2023e.html#109 DataTree, UniTree, Mesa Archival
https://www.garlic.com/~lynn/2022h.html#84 CDC, Cray, Supercomputers
https://www.garlic.com/~lynn/2022f.html#89 CDC6600, Cray, Thornton
https://www.garlic.com/~lynn/2022b.html#98 CDC6000
https://www.garlic.com/~lynn/2021b.html#44 HA/CMP Marketing
https://www.garlic.com/~lynn/2015h.html#10 the legacy of Seymour Cray
https://www.garlic.com/~lynn/2013g.html#6 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013g.html#3 A Complete History Of Mainframe Computing
https://www.garlic.com/~lynn/2012o.html#27 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012m.html#11 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2009c.html#12 Assembler Question
https://www.garlic.com/~lynn/2005u.html#22 Channel Distances
https://www.garlic.com/~lynn/2002i.html#13 CDC6600 - just how powerful a machine was it?

--
virtualization experience starting Jan1968, online at home since Mar1970

Architectural implications of locate mode I/O and channels

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Architectural implications of locate mode I/O and channels
Newsgroups: comp.arch
Date: Sun, 07 Jul 2024 07:30:58 -1000
"Paul A. Clayton" <paaronclayton@gmail.com> writes:
In theory, non-practicing patent licensors seem to make sense, similar to ARM not making chips, but when the cost and risk to the single patent holder is disproportionately small, patent trolling can be profitable. (I suspect only part of the disparity comes from not practicing; the U.S. legal system has significant weaknesses and actual expertise is not easily communicated. My father, who worked for AT&T, mentioned a lawyer who repeatedly sued AT&T who settled out of court because such was cheaper than defending even against a claim without basis.)

re:
https://www.garlic.com/~lynn/2024d.html#24 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#32 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#34 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#39 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#40 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#41 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#48 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#50 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#54 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#55 Architectural implications of locate mode I/O

in the 90s, there was a semantic analysis of patents which found that something like 30% of "computer/technology" patents were filed in other categories using ambiguous wording ... "submarine" patents (unlikely to be found in a normal patent search) ... waiting for somebody that was making lots of money and could be sued for patent infringement.

other trivia: around the turn of the century I was doing some security chip work for a financial institution and was asked to work with a patent boutique legal firm; eventually there were 50 draft (all assigned) patents and the legal firm predicted that there would be over a hundred before done ... some executive looked at the filing costs and directed all the claims be repackaged as nine patents. Then the patent office came back and said they were getting tired of these humongous patents where the filing fee didn't even cover the cost of reading the claims ... and directed the claims be repackaged as at least a couple dozen patents.

some related info
https://www.garlic.com/~lynn/x959.html
https://www.garlic.com/~lynn/subpubkey.html#x959

--
virtualization experience starting Jan1968, online at home since Mar1970

Too-Big-To-Fail Money Laundering

From: Lynn Wheeler <lynn@garlic.com>
Subject: Too-Big-To-Fail Money Laundering
Date: 07 Jul, 2024
Blog: Facebook
After the economic mess implosion at the end of the 1st decade of the century ... "too big to fail/prosecute/jail" institutions started being found money laundering for drug cartels (stories about the money enabling large purchases of military grade equipment and responsible for a big uptick in violence on both sides of the border) ... and being slapped with "deferred prosecution"
https://en.wikipedia.org/wiki/Deferred_prosecution

if they promised to stop ... or they would be prosecuted ... however some had repeated cases, with earlier "deferred prosecution" instances just being ignored.

economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess
too big to fail (prosecute/jail) posts
https://www.garlic.com/~lynn/submisc.html#too-big-to-fail
money laundering posts
https://www.garlic.com/~lynn/submisc.html#money.laundering

past posts mentioning deferred prosecution:
https://www.garlic.com/~lynn/2024b.html#77 Mexican cartel sending people across border with cash to buy these weapons
https://www.garlic.com/~lynn/2024.html#58 Sales of US-Made Guns and Weapons, Including US Army-Issued Ones, Are Under Spotlight in Mexico Again
https://www.garlic.com/~lynn/2024.html#19 Huge Number of Migrants Highlights Border Crisis
https://www.garlic.com/~lynn/2022h.html#89 As US-style corporate leniency deals for bribery and corruption go global, repeat offenders are on the rise
https://www.garlic.com/~lynn/2021k.html#73 Wall Street Has Deployed a Dirty Tricks Playbook Against Whistleblowers for Decades, Now the Secrets Are Spilling Out
https://www.garlic.com/~lynn/2018e.html#111 Pigs Want To Feed at the Trough Again: Bernanke, Geithner and Paulson Use Crisis Anniversary to Ask for More Bailout Powers
https://www.garlic.com/~lynn/2018d.html#60 Dirty Money, Shiny Architecture
https://www.garlic.com/~lynn/2017h.html#79 Feds widen hunt for dirty money in Miami real estate
https://www.garlic.com/~lynn/2017h.html#56 Feds WIMP
https://www.garlic.com/~lynn/2017b.html#39 Trump to sign cyber security order
https://www.garlic.com/~lynn/2017b.html#13 Trump to sign cyber security order
https://www.garlic.com/~lynn/2017.html#45 Western Union Admits Anti-Money Laundering and Consumer Fraud Violations, Forfeits $586 Million in Settlement with Justice Department and Federal Trade Commission
https://www.garlic.com/~lynn/2016e.html#109 Why Aren't Any Bankers in Prison for Causing the Financial Crisis?
https://www.garlic.com/~lynn/2016c.html#99 Why Is the Obama Administration Trying to Keep 11,000 Documents Sealed?
https://www.garlic.com/~lynn/2016c.html#41 Qbasic
https://www.garlic.com/~lynn/2016c.html#29 Qbasic
https://www.garlic.com/~lynn/2016b.html#73 Qbasic
https://www.garlic.com/~lynn/2016b.html#0 Thanks Obama
https://www.garlic.com/~lynn/2016.html#36 I Feel Old
https://www.garlic.com/~lynn/2016.html#10 25 Years: How the Web began
https://www.garlic.com/~lynn/2015h.html#65 Economic Mess
https://www.garlic.com/~lynn/2015h.html#47 rationality
https://www.garlic.com/~lynn/2015h.html#44 rationality
https://www.garlic.com/~lynn/2015h.html#31 Talk of Criminally Prosecuting Corporations Up, Actual Prosecutions Down
https://www.garlic.com/~lynn/2015f.html#61 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2015f.html#57 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2015f.html#37 LIBOR: History's Largest Financial Crime that the WSJ and NYT Would Like You to Forget
https://www.garlic.com/~lynn/2015f.html#36 Eric Holder, Wall Street Double Agent, Comes in From the Cold
https://www.garlic.com/~lynn/2015e.html#47 Do we REALLY NEED all this regulatory oversight?
https://www.garlic.com/~lynn/2015e.html#44 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2015e.html#23 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2015d.html#80 Greedy Banks Nailed With $5 BILLION+ Fine For Fraud And Corruption
https://www.garlic.com/~lynn/2014i.html#10 Instead of focusing on big fines, law enforcement should seek long prison terms for the responsible executives

--
virtualization experience starting Jan1968, online at home since Mar1970

16June1911, IBM Incorporation Day

From: Lynn Wheeler <lynn@garlic.com>
Subject: 16June1911, IBM Incorporation Day
Date: 07 Jul, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024d.html#53 16June1911, IBM Incorporation Day

re: SNA/TCPIP; in the 80s, the communication group was fighting off client/server and distributed computing, trying to preserve their dumb terminal paradigm ... late 80s, a senior disk engineer got a talk scheduled at an annual, world-wide, internal, communication group conference, supposedly on 3174 performance ... but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing a drop in disk sales with data fleeing datacenters to more distributed computing friendly platforms. They had come up with a number of solutions but were constantly vetoed by the communication group (with their corporate strategic ownership of everything that crossed datacenter walls) ... the communication group datacenter stranglehold wasn't just disks, and a couple years later IBM had one of the largest losses in the history of US companies.

As a partial work-around, a senior disk division executive was investing in distributed computing startups that would use IBM disks ... and would periodically ask us to drop by his investments to see if we could provide any help.

other trivia: I had the HSDT project from the early 80s (T1 and faster computer links, both terrestrial and satellite), including work with the NSF director, and was supposed to get $20M from NSF to interconnect the NSF Supercomputing Centers; then congress cuts the budget, some other things happened and finally an RFP is released (in part based on what we already had running). Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.
.... snip ...

... IBM internal politics was not allowing us to bid (a possible contributing factor was being blamed for online computer conferencing). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid, awarded 24Nov87)

A co-worker at CSC was responsible for the Cambridge wide-area network (that evolves into the corporate internal network, originally non-SNA; the technology was also used for the corporate sponsored univ BITNET) ... from one of the inventors of GML at the science center in the 60s:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

... we later transfer to SJR
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to DARPA, where Hendricks described his innovations to the principal scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75, Cerf and Hendricks were the only two delegates from the United States, to attend a workshop on Data Communications at the International Institute for Applied Systems Analysis, 2361 Laxenburg Austria where again, Hendricks spoke publicly about his innovative design which paved the way to the Internet as we know it today.

... snip ...

SJMerc article about Edson (he passed aug2020) and "IBM'S MISSED OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

communication group fighting off client/server & distributed computing posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet (& earn) posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

Architectural implications of locate mode I/O

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Architectural implications of locate mode I/O
Newsgroups: comp.arch
Date: Mon, 08 Jul 2024 08:01:22 -1000
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
Did IMS have a locate mode as well?

re:
https://www.garlic.com/~lynn/2024d.html#24 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#32 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#34 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#39 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#40 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#41 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#48 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#50 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#54 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#55 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#58 Architectural implications of locate mode I/O and channels

channel programs were built by the filesystem library running as part of the application (or directly by application code) .... which then executes a system call, EXCP/SVC0, to invoke the channel program. With MVS and virtual memory it's all in the application virtual address space.

With QSAM, the library code does the data transfer to/from library buffers and then either copies to application buffers, or in "locate" mode passes pointers to the QSAM buffers.

IMS has its data buffer cache directly managed (aware of whether a data record is already in cache or must be read ... and/or is changed in cache and must be written) ... along with a transaction log.

With the transition to virtual memory, the channel programs passed to EXCP/SVC0 now had virtual addresses while the channel architecture required real addresses ... so EXCP/SVC0 required making a copy of the passed channel programs, replacing virtual addresses with real addresses (as well as pinning the associated virtual pages until the I/O completes; the code to create channel program copies with real addresses and manage virtual page pin/unpin was initially done by crafting a copy of the virtual machine CP67 "CCWTRANS" into EXCP).

For privileged apps that had fixed/pinned virtual pages for I/O buffers, a new EXCPVR interface was built ... effectively the original EXCP w/o the (CCWTRANS) channel program copying (and virtual page pinning/unpinning).
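
A minimal sketch of what that EXCP copy/translate step amounts to (the CCW layout is simplified and pin_and_translate() is a hypothetical stand-in for the real page-pin/address-translation services):

  #include <stdint.h>
  #include <stddef.h>
  #include <stdio.h>

  typedef struct {
      uint8_t  cmd;       /* channel command (read, write, search, TIC ...) */
      uint32_t data_addr; /* data address: virtual in, real out             */
      uint16_t flags;
      uint16_t count;
  } ccw_t;

  /* stand-in for the real kernel service (assumption): pin the page(s)
     backing [vaddr, vaddr+len) and return the real address */
  static uint32_t pin_and_translate(uint32_t vaddr, uint16_t len)
  {
      (void)len;
      return vaddr;   /* identity-mapped here, just for illustration */
  }

  /* build the "shadow" copy that the channel will actually execute */
  static void excp_translate(const ccw_t *user, ccw_t *shadow, size_t n)
  {
      for (size_t i = 0; i < n; i++) {
          shadow[i] = user[i];                     /* copy the CCW        */
          shadow[i].data_addr =                    /* virtual -> real,    */
              pin_and_translate(user[i].data_addr, /* pages stay pinned   */
                                user[i].count);    /* until I/O completes */
      }
      /* EXCPVR skips all of this: the caller has already fixed its
         buffers and supplies real addresses itself */
  }

  int main(void)
  {
      ccw_t prog[1] = { { 0x06, 0x00123000u, 0, 4096 } };  /* one read CCW */
      ccw_t shadow[1];
      excp_translate(prog, shadow, 1);
      printf("virtual %08x -> real %08x\n",
             (unsigned)prog[0].data_addr, (unsigned)shadow[0].data_addr);
      return 0;
  }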

IMS "OSAM" and "VSAM" (OSAM may use QSAM
https://www.ibm.com/docs/en/ims/15.3.0?topic=sets-using-osam-as-access-method
IMS communicates with OSAM using OPEN, CLOSE, READ, and WRITE macros. In turn, OSAM communicates with the I/O supervisor by using the I/O driver interface.

Data sets

An OSAM data set can be read by using either the BSAM or QSAM access method. ... snip ...

IMS Performance and Tuning guide page167
https://www.redbooks.ibm.com/redbooks/pdfs/sg247324.pdf
• EXCPVR=0 Prevents page fixing of the OSAM buffer pool. This is the correct choice these days
... snip ...

START_Input/Output
https://en.wikipedia.org/wiki/Start_Input/Output
EXCPVR
https://en.wikipedia.org/wiki/Execute_Channel_Program_in_Real_Storage

--
virtualization experience starting Jan1968, online at home since Mar1970

360/65, 360/67, 360/75 750ns memory

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 360/65, 360/67, 360/75 750ns memory
Date: 08 Jul, 2024
Blog: Facebook
the original virtual memory 360 SMP pubs had up to four processors ... but only two processor SMPs were made (except a special 3-processor version for the manned orbital lab, "MOL", project) ... single processor and two processor specs can be seen in the 360/67 func spec at bitsavers
https://www.bitsavers.org/pdf/ibm/360/functional_characteristics/GA27-2719-2_360-67_funcChar.pdf

... control register layout still has provisions for four-processor configuration.

360/65 SMP was just two single processor machines (each with their own dedicated channels, SMP channel configuration simulated with two-channel controllers at same address on the two machines) sharing real storage (same for 370 SMP). 360/67 "duplex" had multi-ported memory where CPU transfers and I/O transfers could go on concurrently and allowed both processors to access all channels.

Charlie was working on fine-grain CP67 SMP locking at the science center when he invented compare-and-swap (name chosen because "CAS" were his initials). In meetings with the POK 370 architecture owners trying to get CAS added, we were told that the POK favorite son operating system people (MVT) said (360) test-and-set was sufficient; and if CAS was to be justified, uses other than kernel locking (single spin-lock) were needed ... thus were born the examples for multi-threaded applications (as alternative to kernel locking system calls).
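
A minimal sketch of that kind of multi-threaded use (C11 atomics standing in for the 370 compare-and-swap instruction; the shared-counter update is just an illustrative example of the CAS retry loop, not the original Principles of Operation wording; build with -pthread):

  /* lock-free shared-counter update via compare-and-swap -- the style of
     multi-threaded (non-kernel-lock) use argued for above */
  #include <stdatomic.h>
  #include <pthread.h>
  #include <stdio.h>

  static _Atomic unsigned long counter = 0;

  static void add_to_counter(unsigned long delta)
  {
      unsigned long old = atomic_load(&counter);
      /* retry until no other thread changed the value between our fetch
         and our store -- the classic CAS retry loop */
      while (!atomic_compare_exchange_weak(&counter, &old, old + delta))
          ;   /* on failure 'old' is refreshed with the current value */
  }

  static void *worker(void *arg)
  {
      (void)arg;
      for (int i = 0; i < 100000; i++)
          add_to_counter(1);
      return NULL;
  }

  int main(void)
  {
      pthread_t t[4];
      for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
      for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
      printf("counter = %lu (expect 400000)\n", atomic_load(&counter));
      return 0;
  }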

In the morph of CP67->VM370, lots of features were simplified (and/or dropped, including SMP support). In 1974, I started adding lots of stuff back into a VM370R2-base for my internal "CSC/VM" (one of my hobbies after joining IBM was enhanced production operating systems for internal datacenters, and the internal, world-wide sales&marketing support HONE systems were long time customers) ... which included kernel re-org for multiprocessor operation (but not SMP itself). Then in 1975, I upgraded to a VM370R3-base with the inclusion of SMP support, originally for the US consolidated HONE system (up in Palo Alto across the back parking lot from the IBM Palo Alto Science Center; trivia: when FACEBOOK 1st moved into silicon valley, it was into a new bldg built next door to the former HONE datacenter).

When the US HONE datacenters were consolidated in Palo Alto, it was as a loosely-coupled, shared DASD, "single-system image" complex with load-balancing and fall-over support ... then with SMP support they could add a 2nd processor to each system (largest IBM "single-system image" complex in the world). With some sleight of hand, I was able to double the throughput of each system (at a time when POK MVT/SVS/MVS documentation was only claiming two-processor SMP having 1.2-1.5 times the throughput of a single processor).

Future System was cratering (during FS, internal politics was killing off 370 efforts) and there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel.
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

and I was con'ed into helping with a 16-processor SMP 370 effort and we shanghaied the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips). Everybody thought it was great until somebody told the head of POK that it could be decades before MVS had (effective) 16-way support (starting at 1.2-1.5 times throughput for 2-way, with overhead/contention increasing for each processor added). Some of us were then invited to never visit POK again and the 3033 processor engineers were directed "heads down" only on 3033. Note POK doesn't ship a 16-processor SMP machine until after the turn of the century.

trivia: the 360/67 was originally, officially for tss/360 ... but tss/360 was enormously bloated with poor performance. A single processor 360/67 only had a max of 1mbyte real storage (while two processor could have 2mbytes) ... most of the 1mbyte taken up by the tss/360 kernel. TSS/360 published that a two processor SMP had 3.8 times the throughput of a single processor (not mentioning that the single processor heavily page thrashed ... and while two processor still wasn't really good, the 2nd mbyte allowed less page thrashing). Melinda has some history about the early 360/67 and TSS/360-CP67 conflict (also at one time the TSS/360 project had something like 1200 people while CP67/CMS had 12).
https://www.leeandmelindavarian.com/Melinda#VMHist

SMP, tightly-coupled, multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

360/65, 360/67, 360/75 750ns memory

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 360/65, 360/67, 360/75 750ns memory
Date: 08 Jul, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024d.html#62 360/65, 360/67, 360/75 750ns memory

I had taken a two credit hr intro to fortran/computers and at the end of the semester I was hired to rewrite 1401 MPIO for the 360/30. The univ was getting a 360/67 for tss/360 to replace a 709/1401 and got a 360/30 temporarily replacing the 1401 until the 360/67 was available. I was given a pile of software&hardware manuals (the univ shut down the datacenter on weekends and I got it dedicated, although 48hrs w/o sleep made monday classes hard) and got to design&implement my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc and within a few weeks had a 2000 card assembler program. The 360/67 arrived within a year of taking the intro class, and I was hired fulltime responsible for OS/360 (tss/360 was never satisfactory), so the 360/67 ran as a 360/65.

Then three people from CSC came out to install CP67/CMS (3rd installation after CSC itself and MIT Lincoln Labs) and I mostly played with it in my dedicated weekend window. Before I graduate I was hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing in an independent business unit). I think Renton was the largest datacenter in the world (couple hundred million in 360s), 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room. Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing Field for payroll (although they enlarge the machine room to install a 360/67 for me to play with when I wasn't doing other stuff). When I graduate, I join IBM CSC instead of staying in the CFO office.

While I was at Boeing, the Boeing Huntsville two-processor, duplex/SMP 360/67 was brought up to Seattle. It had (also) originally been ordered for TSS/360 but was configured as two 360/65 systems running MVT. However it was intended for CAD/CAM with lots of 2250 graphic displays
https://en.wikipedia.org/wiki/IBM_2250

and ran into the horrible MVT storage management problem, aggravated by long running CAD/CAM jobs ... they had modified MVTR13 to run in virtual memory mode with no paging (sort of a precursor to VS2/SVS) ... just using the fiddling of virtual memory as a partial countermeasure.

Note a little over a decade ago, a customer asked me to track down the decision to add virtual memory to all 370s. Found a staff member to the executive making the decision. Basically MVT storage management was so bad that region sizes typically had to be specified four times larger than used ... as a result a standard 1mbyte 370/165 only had space for running four concurrent regions, insufficient to keep the system busy and justified. Putting MVT in a 16mbyte virtual address space, VS2/SVS (sort of like running MVT in a 16mbyte virtual machine), allowed the number of concurrently running regions to be increased by a factor of four (up to 15, capped by the 4bit storage protect keys on 2kbyte boundaries), with little or no paging. Later they had to move to VS2/MVS, giving each "region" its own private address space (for protection), as a work-around to the storage protect key limit of 15.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

some posts mentioning Univ MPIO work, Boeing CFO work, Boeing Hunstville two processor 360/67
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#19 OS/360 Bloat
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2020.html#32 IBM TSS
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles

--
virtualization experience starting Jan1968, online at home since Mar1970

360/65, 360/67, 360/75 750ns memory

From: Lynn Wheeler <lynn@garlic.com>
Subject: 360/65, 360/67, 360/75 750ns memory
Date: 08 Jul, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024d.html#62 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024d.html#63 360/65, 360/67, 360/75 750ns memory

Trivia: In the early 80s, I was introduced to John Boyd and would sponsor his briefings at IBM ... he was responsible for redoing the F-15 design (cutting its weight in half) and for the YF-16 & YF-17 (that become the F16 and F18) ... and also helped Pierre Sprey with the A-10.
https://en.wikipedia.org/wiki/General_Dynamics_F-16_Fighting_Falcon#Relaxed_stability_and_fly-by-wire

The F-16 is the first production fighter aircraft intentionally designed to be slightly aerodynamically unstable, also known as "relaxed static stability" (RSS), to improve manoeuvrability. Most aircraft are designed with positive static stability, which induces aircraft to return to straight and level flight attitude if the pilot releases the controls; this reduces manoeuvrability as the inherent stability has to be overcome. Aircraft with negative stability are designed to deviate from controlled flight and thus be more maneuverable. At supersonic speeds the F-16 gains stability (eventually positive) due to aerodynamic changes.

... snip ...

other trivia: one of Boyd's stories was about being very vocal that the electronics across the trail wouldn't work and, possibly as punishment, he is put in command of "spook base" (about the same time I was at Boeing); one of his biographies has "spook base" as a $2.5B "windfall" for IBM (aka ten times Renton).
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
https://en.wikipedia.org/wiki/Operation_Igloo_White

other Boyd mention:
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
The former commandant (passed earlier this spring) would sponsor us for Boyd conferences at Marine Corps Univ.

Boyd posts and URLs
https://www.garlic.com/~lynn/subboyd.html

--
virtualization experience starting Jan1968, online at home since Mar1970

360/65, 360/67, 360/75 750ns memory

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 360/65, 360/67, 360/75 750ns memory
Date: 08 Jul, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024d.html#62 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024d.html#63 360/65, 360/67, 360/75 750ns memory

note: Amdahl had won the battle to make ACS 360 compatible and then ACS/360 was killed because executives thought that it would advance the state-of-the-art too fast and IBM might lose control of the market ... it also mentions designs for full-speed, 1/3rd speed, and 1/9th speed machines (he leaves IBM shortly afterwards):
https://people.computing.clemson.edu/~mark/acs_end.html

trivia: my wife was in the gburg JES group and one of the catchers for ASP/JES3. Then she was con'ed into going to POK to be responsible for "loosely-coupled" (shared dasd) architecture (peer-coupled shared data). She didn't remain long because 1) constant battles with the communication group trying to force her into using SNA/VTAM for loosely-coupled operation and 2) little uptake (until much later with SYSPLEX and Parallel SYSPLEX) except for IMS hot-standby. She has a story about asking Vern Watts who he would ask for permission to do hot-standby ... and he replies nobody, he would just tell them after he was all done.

peer-coupled shared data posts
https://www.garlic.com/~lynn/submain.html#shareddata
HASP/ASP, JES/JES3, NJE/NJI posts
https://www.garlic.com/~lynn/submain.html#hasp

--
virtualization experience starting Jan1968, online at home since Mar1970

360/65, 360/67, 360/75 750ns memory

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 360/65, 360/67, 360/75 750ns memory
Date: 08 Jul, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024d.html#62 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024d.html#63 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024d.html#64 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024d.html#65 360/65, 360/67, 360/75 750ns memory

trivia: while the Boeing Renton datacenter was mostly lots of 360/65s, there was one 360/75 that had a black rope around its perimeter area; when running a classified program, there would be guards at the corners and black velvet draped over the lights of the 75 panel and the 1403 areas where printed paper was exposed. 360/75 funcchars at bitsavers
https://www.bitsavers.org/pdf/ibm/360/functional_characteristics/GA27-2719-2_360-67_funcChar.pdf

other trivia: I was blamed for online computer conferencing in the late 70s and early 80s. It really took off the spring of 1981 when I distributed a trip report of a visit to Jim Gray at Tandem. When the corporate executive committee was told, there was something of an uproar (folklore is that 5of6 wanted to fire me), with some task forces that resulted in official online conferencing software and officially sanctioned moderated forums. One of the observations:

Date: 04/23/81 09:57:42
To: wheeler

your ramblings concerning the corp(se?) showed up in my reader yesterday. like all good net people, i passed them along to 3 other people. like rabbits interesting things seem to multiply on the net. many of us here in pok experience the sort of feelings your mail seems so burdened by: the company, from our point of view, is out of control. i think the word will reach higher only when the almighty $$$ impact starts to hit. but maybe it never will. its hard to imagine one stuffed company president saying to another (our) stuffed company president i think i'll buy from those inovative freaks down the street. '(i am not defending the mess that surrounds us, just trying to understand why only some of us seem to see it).

bob tomasulo and dave anderson, the two poeple responsible for the model 91 and the (incredible but killed) hawk project, just left pok for the new stc computer company. management reaction: when dave told them he was thinking of leaving they said 'ok. 'one word. 'ok. ' they tried to keep bob by telling him he shouldn't go (the reward system in pok could be a subject of long correspondence). when he left, the management position was 'he wasn't doing anything anyway. '

in some sense true. but we haven't built an interesting high-speed machine in 10 years. look at the 85/165/168/3033/trout. all the same machine with treaks here and there. and the hordes continue to sweep in with faster and faster machines. true, endicott plans to bring the low/middle into the current high-end arena, but then where is the high-end product development?


... snip ... top of post, old email index

note: trout/3090 engineers thought they were doing much better than 3081 (warmed over FS technology from the early 70s used in the early 80s, not part of 85/165/168/3033/trout).
http://www.jfsowa.com/computer/memo125.htm
the end of the ACS/360 page has references to features that show up in ES/9000 more than two decades later
https://people.cs.clemson.edu/~mark/acs_end.html
Tomasulo algorithm
https://en.wikipedia.org/wiki/Tomasulo%27s_algorithm
some discussion of online computer conferencing
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

shortly after joining IBM, I was asked to help with hyper-/multi-threading the 370/195 (aka simulating a two-processor SMP; the patent is mentioned in the "acs_end" web page, "Sidebar: Multithreading"). The 370/195 didn't have branch prediction or speculative execution, so conditional branches drained the pipeline and most codes ran at half the 195's rated speed. Going to two instruction streams (two simulated processors) theoretically would keep the machine running at top speed (modulo MVT two-processor SMP overhead, which meant only 1.2-1.5 times the throughput of a single processor). The effort got canceled when it was decided to add virtual memory to all 370s (and it was decided it wasn't worth it to retrofit virtual memory to the 370/195). 360/195 functional characteristics (for the 370/195 they added the basic new 370 instructions and some instruction retry):
https://www.bitsavers.org/pdf/ibm/360/functional_characteristics/GA22-6943-1_360-195_Functional_Characteristics_197008.pdf
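
A minimal back-of-envelope sketch of that trade-off, using only the figures quoted above (half rated speed for one instruction stream, 1.2-1.5 times single-processor throughput for MVT two-processor SMP); nothing here is measured 370/195 data:

# single i-stream: conditional branches drain the pipeline, so the machine
# runs at roughly half its rated speed (figure quoted above)
single_stream = 0.5

# two independent i-streams could, in principle, fill the idle pipeline slots
hardware_ceiling = min(1.0, 2 * single_stream)      # ~ rated speed

# but the OS sees two processors, and MVT two-processor SMP overhead meant
# the pair only delivered about 1.2-1.5x a single processor's throughput
for smp_factor in (1.2, 1.5):
    effective = single_stream * smp_factor
    print(f"MVT SMP factor {smp_factor}: ~{effective:.0%} of rated speed "
          f"(hardware ceiling ~{hardware_ceiling:.0%})")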

SMP, tightly-coupled, multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
FS posts
https://www.garlic.com/~lynn/submain.html#futuresys
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

--
virtualization experience starting Jan1968, online at home since Mar1970

A Timeline of Mainframe Innovation

From: Lynn Wheeler <lynn@garlic.com>
Subject: A Timeline of Mainframe Innovation
Date: 09 Jul, 2024
Blog: Facebook
A Timeline of Mainframe Innovation
https://interactive.techchannel.com/ibm-z-impact/timeline-of-mainframe-innovation-125ZV-736XK.html

System/R was the original sql/relational implementation and we managed to do the tech transfer to Endicott for SQL/DS, while the company was preoccupied with "EAGLE" (the next great DBMS follow-on to IMS). When "EAGLE" implodes, there is a request for how fast System/R could be ported to MVS, which is eventually released as DB2 (originally for decision support "only").

1980, STL (since renamed SVL) was bursting at the seams and transferring 300 people from the IMS group to an offsite bldg (along with their 3270 terminals), with dataprocessing back to the STL datacenter. They tried remote 3270 but found the human factors totally unacceptable. I was con'ed into doing channel-extender support for them, allowing channel-attached 3270 controllers at the offsite bldg with no perceived difference in human factors. They had previously distributed the 3270 controllers across the processor channels shared with disks. The channel-extender significantly cut the channel busy time (for the same amount of 3270 traffic), increasing system throughput by 10-15%. STL then considered putting all channel-attached 3270 controllers (even those inhouse) on channel-extenders (for the improvement in system throughput). An attempt to release my support got vetoed by a group in POK (playing with some serial stuff, afraid it would make it harder to announce their stuff).

In 1988, the IBM branch office asked if I could help LLNL (national lab) get some serial stuff they were playing with standardized (including some stuff I had done in 1980), which quickly becomes the fibre-channel standard ("FCS", initially 1gbit, full-duplex, 200mbytes/sec aggregate). POK eventually gets their stuff announced with ES/9000 as ESCON (when it is already obsolete, 17mbytes/sec). Then some POK engineers start playing with FCS and define a heavy-weight protocol that drastically cuts throughput, which is eventually announced as FICON (running over FCS). The most recent public benchmark I can find is z196 "Peak I/O", which got 2M IOPS using 104 FICON. About the same time, an FCS was announced for E5-2600 server blades claiming over a million IOPS (two native FCS getting higher throughput than 104 FICON). Also IBM pubs recommend keeping SAPs (system assist processors that do the actual I/O) capped at 70% CPU (aka about 1.5M IOPS).
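
Back-of-envelope arithmetic on those quoted figures (benchmark numbers as cited above, nothing measured here):

ficon_channels = 104
ficon_peak_iops = 2_000_000        # z196 "Peak I/O" benchmark
native_fcs_iops = 1_000_000        # single FCS claimed for E5-2600 blades

print(f"per FICON: ~{ficon_peak_iops / ficon_channels:,.0f} IOPS")   # ~19,000
print(f"two native FCS: ~{2 * native_fcs_iops:,.0f} IOPS")           # >= all 104 FICON
# capping the SAPs at 70% CPU puts the practical figure around
print(f"70% SAP cap: ~{0.7 * ficon_peak_iops:,.0f} IOPS")            # ~1.4-1.5M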

system/r posts
https://www.garlic.com/~lynn/submain.html#systemr
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FICON and/or FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon

some recent mainframe posts
https://www.garlic.com/~lynn/2024c.html#73 Mainframe and Blade Servers
https://www.garlic.com/~lynn/2024c.html#2 ReBoot Hill Revisited
https://www.garlic.com/~lynn/2024b.html#68 IBM Hardware Stories
https://www.garlic.com/~lynn/2024b.html#53 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#40 Vintage Mainframe
https://www.garlic.com/~lynn/2022h.html#113 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022g.html#71 Mainframe and/or Cloud
https://www.garlic.com/~lynn/2022f.html#10 9 Mainframe Statistics That May Surprise You
https://www.garlic.com/~lynn/2022d.html#6 Computer Server Market
https://www.garlic.com/~lynn/2022c.html#111 Financial longevity that redhat gives IBM
https://www.garlic.com/~lynn/2022c.html#67 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022c.html#54 IBM Z16 Mainframe
https://www.garlic.com/~lynn/2022c.html#19 Telum & z16

--
virtualization experience starting Jan1968, online at home since Mar1970

ARPANET & IBM Internal Network

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: ARPANET & IBM Internal Network
Date: 10 Jul, 2024
Blog: Facebook
Note the IBM internal network was passing 200 nodes ... the base was built off the 60s science center CP67 wide-area network (RSCS/VNET) and then the growing number of VM370 (RSCS/VNET) systems. San Jose and some other locations had wanted to start building a "SUN" network out of HASP/JES2 systems ... and an RSCS/VNET line-driver was written that simulated the HASP protocol. The problem was that the HASP support (which still had "TUCC" in cols. 68-71) used spare entries in the HASP 255-entry pseudo-device table, usually around 160-180 ... and that carried over into JES2. The HASP/JES2 implementation also trashed any traffic where there wasn't an entry in the local table for either the traffic origin or destination. Since the internal network was passing 200 nodes, the HASP/JES2 systems had to be carefully restricted to edge/boundary nodes.
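
An illustrative sketch (hypothetical code and node names, not HASP/JES2 source) of the two limitations just described: a node table capped by the spare pseudo-device entries, and traffic discarded when either end isn't in the local table:

MAX_NODES = 160                     # spare pseudo-device entries, roughly 160-180
local_table = {f"NODE{i:03d}" for i in range(MAX_NODES)}    # hypothetical node names

def jes2_style_forward(origin, destination, payload):
    # behaviour as described above: discard unless BOTH ends are locally known
    if origin not in local_table or destination not in local_table:
        return None                 # traffic silently trashed
    return (destination, payload)

def rscs_style_forward(destination, payload, next_hop):
    # RSCS/VNET-style store-and-forward: pass traffic along toward the
    # destination even when this node doesn't know every node in the network
    return (next_hop.get(destination, "DEFAULT-ROUTE"), payload)

# with the network past 200 (later 1000) nodes, a 160-180 entry table means a
# JES2-style node drops traffic for most of the network
print(jes2_style_forward("NODE001", "NODE500", "job output"))    # None -> discarded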

By the time of the 1Jan1983 ARPANET cut-over from IMP/HOST protocol to internetworking protocol, there were approx. 100 IMP nodes and 255 hosts ... while the internal network was rapidly approaching 1000 nodes. Old archive post with misc 1983 network update samples and a list of company locations that added one or more nodes during 1983:
https://www.garlic.com/~lynn/2006k.html#8

The Cambridge internal network technology was also used for the corporate-sponsored univ BITNET, for a time also larger than the arpanet/internet
https://en.wikipedia.org/wiki/BITNET
in 89, merges with CSNET
https://en.wikipedia.org/wiki/CSNET
to form CREN:
https://en.wikipedia.org/wiki/Corporation_for_Research_and_Educational_Networking

History/comment from one of the Cambridge people who had invented GML, but whose job was to promote the science center wide-area network
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

GML drift: some of the MIT CTSS/7094 people had gone to the 5th flr for MULTICS and others had gone to the IBM Science Center on the 4th flr. CTSS RUNOFF
https://en.wikipedia.org/wiki/TYPSET_and_RUNOFF
was rewritten for CP67/CMS as "SCRIPT" and after GML was invented in 1969, GML tag processing was added to "SCRIPT".

JES2 was finally updated to a max of 999 nodes, but that was after the internal network had passed 1000 (and JES2 still trashed traffic if the origin or destination node wasn't in the local table). The other JES2 shortcoming was that network fields and job control fields were intermixed, and traffic from JES2 nodes at different release levels could crash the destination JES2, bringing down MVS (requiring manual re-ipl). As a result, the VM370 emulated HASP driver acquired a family of changes that could reorganize HASP/JES2 header fields to correspond to what the directly connected JES2 system required. There was the infamous case of Hursley MVS systems being crashed by traffic from San Jose MVS systems, and the Hursley VM370 staff was blamed (they hadn't gotten notice of the latest VM370 emulated JES2 driver accounting for the San Jose JES2 header format changes).
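
An illustrative sketch of what those emulated-HASP driver changes amounted to (the field layouts below are hypothetical, purely to show the idea of rewriting headers into whatever the directly connected JES2 release expects):

RELEASE_LAYOUTS = {
    # release level -> header fields it expects, in order (hypothetical layouts)
    "JES2_R3": ["origin", "destination", "job_class", "priority"],
    "JES2_R4": ["origin", "destination", "priority", "job_class", "account"],
}

def reformat_header(header, target_release):
    # keep only the fields the target release understands, in its expected
    # order, supplying a default for anything the sender didn't provide --
    # so a format change on one end can't crash the MVS system on the other
    layout = RELEASE_LAYOUTS[target_release]
    return {field: header.get(field, "") for field in layout}

incoming = {"origin": "SANJOSE", "destination": "HURSLEY",
            "priority": 5, "job_class": "A"}
print(reformat_header(incoming, "JES2_R4"))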

The internal network was larger than the ARPANET/INTERNET from just about the beginning until sometime mid/late 80s ... then in large part the IBM communication group forcing the internal network to convert to SNA, and their fierce battle trying to block client/server and distributed computing (the internet started to see PCs and workstations as nodes, while internally they were restricted to terminal emulation).

Some of us (including the person responsible for the CP67 wide-area network) transfer from Cambridge to San Jose in the 2nd half of the 70s.
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to DARPA, where Hendricks described his innovations to the principal scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75, Cerf and Hendricks were the only two delegates from the United States, to attend a workshop on Data Communications at the International Institute for Applied Systems Analysis, 2361 Laxenburg Austria where again, Hendricks spoke publicly about his innovative design which paved the way to the Internet as we know it today.

... snip ...

SJMerc article about Edson (he passed aug2020) and "IBM'S MISSED OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

A network-distributed development project started in 1970 between Endicott and Cambridge (after the decision to add virtual memory to all 370s) ... to modify CP67 (running on a 360/67) to emulate 370 virtual memory machines ... and then to do a modified CP67 that ran on 370 virtual memory machines. "CP67I" was running regularly in "CP67H" 370 virtual machines a year before the first engineering 370 with virtual memory was operational (in fact Endicott used ipl'ing CP67I as a test for the first engineering 370 virtual memory hardware). In part because non-IBM students, staff, and professors from Boston area institutions were using the Cambridge system, my CP67L/CSCVM ran on the real hardware and CP67H ran in a 360/67 virtual machine (isolated from other users on the Cambridge system), with CP67I running in a CP67H 370 virtual machine (and CMS running in a CP67I virtual machine)

Cambridge Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HASP, ASP, JES2, JES3, NJE, NJI posts
https://www.garlic.com/~lynn/submain.html#hasp
GML, SGML, HTML, etc, posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

some recent posts mentioning CP67L, CP67H, CP67I, CP67SJ
https://www.garlic.com/~lynn/2024c.html#88 Virtual Machines
https://www.garlic.com/~lynn/2023g.html#63 CP67 support for 370 virtual memory
https://www.garlic.com/~lynn/2023f.html#47 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023e.html#70 The IBM System/360 Revolution
https://www.garlic.com/~lynn/2023e.html#44 IBM 360/65 & 360/67 Multiprocessors
https://www.garlic.com/~lynn/2023d.html#98 IBM DASD, Virtual Memory
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2022h.html#22 370 virtual memory
https://www.garlic.com/~lynn/2022e.html#94 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#80 IBM Quota
https://www.garlic.com/~lynn/2022d.html#59 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#55 Precursor to current virtual machines and containers
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021k.html#23 MS/DOS for IBM/PC
https://www.garlic.com/~lynn/2021g.html#34 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021g.html#6 IBM 370
https://www.garlic.com/~lynn/2021d.html#39 IBM 370/155
https://www.garlic.com/~lynn/2021c.html#5 Z/VM

--
virtualization experience starting Jan1968, online at home since Mar1970

ARPANET & IBM Internal Network

From: Lynn Wheeler <lynn@garlic.com>
Subject: ARPANET & IBM Internal Network
Date: 10 Jul, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024d.html#68 ARPANET & IBM Internal Network

RSCS/VNET dates from the 60s (point-to-point telco links), predating SNA by nearly a decade (our joke: SNA wasn't a System, wasn't a Network, and wasn't an Architecture; about the SNA time-frame, my wife was co-author of AWP39, Peer-to-Peer Networking Architecture; they had to qualify it with "Peer-to-Peer" since the communication group had misused "network" for SNA). RSCS/VNET did use the CP67 (and then VM370) spool file system with an optimized 4k-block interface for staging data ... resulting in an aggregate throughput limit of about 6-8 4k records/sec (about 32kbytes/320kbits per sec, less if the spool file system was heavily loaded with use by other users). Early 80s I got HSDT with T1 and faster computer links (both terrestrial and satellite) and I needed 3-4mbits/sec per T1 link (300-400kbytes/sec, 75-100 4k records/sec).
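
The arithmetic behind those figures (just restating the numbers above):

BLOCK = 4096                                   # bytes per spool record

sync_records_per_sec = (6, 8)                  # synchronous spool interface limit
print([r * BLOCK for r in sync_records_per_sec])     # ~24-32 kbytes/sec aggregate

t1_need_bytes = (300_000, 400_000)             # 3-4 mbit/sec full-duplex T1, per link
print([b // BLOCK for b in t1_need_bytes])     # ~73-97 -> "75-100 4k records/sec"
# i.e. a single T1 link needs roughly ten times the aggregate throughput the
# synchronous interface could deliver for the entire system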

I did a rewrite of the VM370 spool file system in VS/Pascal, running in a virtual address space; it provided RSCS/VNET with an asynchronous interface (instead of synchronous) and supported contiguous allocation, multi-block transfer, read-ahead and write-behind, and super fast checkpoint recovery, allowing support of multiple T1 (and faster) links. I was scheduled to give a presentation (spring 1987) on upgrading the backbone network hubs at a corporate backbone network meeting when I got email saying the communication group had restricted attendance to managers only, while they brow-beat the company into converting the internal network to SNA (SNA products at the time capped at 56kbit links); claims included that if it wasn't upgraded to SNA, internal email PROFS would stop working (which had been running fine for nearly a decade)

The communication group was also trying to suppress release of mainframe TCP/IP support, and when they lost, they changed strategy and said that since they had corporate strategic responsibility for everything that crossed datacenter walls, it had to be released through them. What shipped got an aggregate of 44kbytes/sec using nearly a whole 3090 processor. I then did support for RFC1044 and, in some tuning tests at Cray Research between a Cray and an IBM 4341, got sustained 4341 channel throughput using only a modest amount of 4341 CPU (something like a 500 times improvement in bytes moved per instruction executed).

I was also working with the NSF Director, and HSDT was supposed to get $20M to interconnect the NSF supercomputer centers; then congress cuts the budget, some other things happen, and finally an RFP is released (in part based on what we already had running). 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

... IBM internal politics was not allowing us to bid (possibly contributing was being blamed for online computer conferencing). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid, awarded 24Nov87)

internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

some posts mentioning spool file system rewrite in vs/pascal
https://www.garlic.com/~lynn/2023g.html#68 Assembler & non-Assembler For System Programming
https://www.garlic.com/~lynn/2017e.html#24 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2011e.html#29 Multiple Virtual Memory
https://www.garlic.com/~lynn/2010k.html#26 Was VM ever used as an exokernel?
https://www.garlic.com/~lynn/2009h.html#63 Operating Systems for Virtual Machines
https://www.garlic.com/~lynn/2008g.html#22 Was CMS multi-tasking?
https://www.garlic.com/~lynn/2007c.html#21 How many 36-bit Unix ports in the old days?
https://www.garlic.com/~lynn/2005s.html#28 MVCIN instruction
https://www.garlic.com/~lynn/2003k.html#63 SPXTAPE status from REXX
https://www.garlic.com/~lynn/2003k.html#26 Microkernels are not "all or nothing". Re: Multics Concepts For
https://www.garlic.com/~lynn/2003b.html#33 dasd full cylinder transfer (long post warning)
https://www.garlic.com/~lynn/2000b.html#43 Migrating pages from a paging device (was Re: removal of paging device)

some recent posts mentioning awp39:
https://www.garlic.com/~lynn/2024d.html#7 TCP/IP Protocol
https://www.garlic.com/~lynn/2024b.html#101 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#56 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#30 ACP/TPF
https://www.garlic.com/~lynn/2024.html#84 SNA/VTAM
https://www.garlic.com/~lynn/2023f.html#40 Rise and Fall of IBM
https://www.garlic.com/~lynn/2023e.html#41 Systems Network Architecture
https://www.garlic.com/~lynn/2023c.html#49 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2022f.html#50 z/VM 50th - part 3
https://www.garlic.com/~lynn/2022f.html#43 What's something from the early days of the Internet which younger generations may not know about?
https://www.garlic.com/~lynn/2022f.html#4 What is IBM SNA?
https://www.garlic.com/~lynn/2022e.html#25 IBM "nine-net"
https://www.garlic.com/~lynn/2021h.html#90 IBM Internal network

--
virtualization experience starting Jan1968, online at home since Mar1970

ARPANET & IBM Internal Network

From: Lynn Wheeler <lynn@garlic.com>
Subject: ARPANET & IBM Internal Network
Date: 10 Jul, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024d.html#68 ARPANET & IBM Internal Network
https://www.garlic.com/~lynn/2024d.html#69 ARPANET & IBM Internal Network

NSA DOCKMASTER and public email
https://www.multicians.org/mgn.html#NSA
https://www.multicians.org/site-dockmaster.html

after leaving IBM, I was rep to the ANSI financial industry standards body, including dealing with some amount of crypto, and there were NIST and other gov. agencies with reps. I got asked to be on a panel in the trusted computing track at IDF ... I semi-facetiously mentioned aggressively cost reducing a $500 mil-spec security chip to less than a dollar while improving security ... gone 404 but lives on at the wayback machine (the guy running the TPM (trusted platform module) chip effort was in the front row and commented that I was able to do it because I didn't have a committee of 200 people helping me with the design)
https://web.archive.org/web/20011109072807/http://www.intel94.com/idf/spr2001/sessiondescription.asp?id=stp%2bs13

I also got asked to give a presentation to a former agency director then at BAH ... and the net was they couldn't see how to make any profit. Trivia: BAH had bought the former (IBM) SBS bldg in Tysons, gutted the bldg and built a 2nd identical bldg with a lobby between the two bldgs. The former director's office was in the original (SBS) bldg and included the area where my wife's office had been (she returned to IBM after SBS was dissolved).

... another case where people would complain to me that they couldn't do something because it cost too much; I would aggressively cost reduce, and then they would complain they couldn't do it because there was no profit.

I had embedded the crypto in the silicon of the chip and just before I went for EAL5-high (or 6) evaluation, the gov. pulled the crypto evaluation criteria and I had to settle for a chip EAL4-high evaluation.

some security chip work refs
https://www.garlic.com/~lynn/x959.html
some financial standard posts
https://www.garlic.com/~lynn/subpubkey.html#x959
some assurance posts
https://www.garlic.com/~lynn/subintegrity.html#assurance

posts mentioning EAL4 evaluation
https://www.garlic.com/~lynn/2023f.html#2 Bounty offered for secret NSA seeds behind NIST elliptic curves algo
https://www.garlic.com/~lynn/2022b.html#108 Attackers exploit fundamental flaw in the web's security to steal $2 million in cryptocurrency
https://www.garlic.com/~lynn/2017e.html#76 Typesetting
https://www.garlic.com/~lynn/2017d.html#92 Old hardware
https://www.garlic.com/~lynn/2013l.html#55 "NSA foils much internet encryption"
https://www.garlic.com/~lynn/2013k.html#88 NSA and crytanalysis
https://www.garlic.com/~lynn/2012m.html#7 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2010p.html#0 CARD AUTHENTICATION TECHNOLOGY - Embedded keypad on Card - Is this the future
https://www.garlic.com/~lynn/2010o.html#84 CARD AUTHENTICATION TECHNOLOGY - Embedded keypad on Card - Is this the future
https://www.garlic.com/~lynn/2010o.html#56 The Credit Card Criminals Are Getting Crafty
https://www.garlic.com/~lynn/2010m.html#57 Has there been a change in US banking regulations recently
https://www.garlic.com/~lynn/2010f.html#83 Notes on two presentations by Gordon Bell ca. 1998
https://www.garlic.com/~lynn/2010f.html#26 Should the USA Implement EMV?
https://www.garlic.com/~lynn/2009q.html#40 Crypto dongles to secure online transactions
https://www.garlic.com/~lynn/2009n.html#12 33 Years In IT/Security/Audit
https://www.garlic.com/~lynn/2009n.html#7 Some companies are selling the idea that you can use just a (prox) physical access badge (single factor) for logical access as acceptable
https://www.garlic.com/~lynn/2008q.html#64 EAL5 Certification for z10 Enterprise Class Server
https://www.garlic.com/~lynn/2008q.html#63 EAL5 Certification for z10 Enterprise Class Server
https://www.garlic.com/~lynn/2008e.html#62 Any benefit to programming a RISC processor by hand?
https://www.garlic.com/~lynn/2008b.html#13 Education ranking
https://www.garlic.com/~lynn/2007u.html#11 Public Computers
https://www.garlic.com/~lynn/2007u.html#5 Public Computers
https://www.garlic.com/~lynn/2007q.html#34 what does xp do when system is copying
https://www.garlic.com/~lynn/2007l.html#39 My Dream PC -- Chip-Based
https://www.garlic.com/~lynn/2007b.html#47 newbie need help (ECC and wireless)
https://www.garlic.com/~lynn/2007b.html#30 How many 36-bit Unix ports in the old days?
https://www.garlic.com/~lynn/2004m.html#41 EAL5
https://www.garlic.com/~lynn/2004j.html#2 Authenticated Public Key Exchange without Digital Certificates?
https://www.garlic.com/~lynn/2002m.html#72 Whatever happened to C2 "Orange Book" Windows security?
https://www.garlic.com/~lynn/2002m.html#44 Beware, Intel to embed digital certificates in Banias
https://www.garlic.com/~lynn/2002c.html#15 Opinion on smartcard security requested
https://www.garlic.com/~lynn/2002c.html#10 Opinion on smartcard security requested
https://www.garlic.com/~lynn/aadsm27.htm#37 The bank fraud blame game
https://www.garlic.com/~lynn/aadsm24.htm#26 Naked Payments IV - let's all go naked
https://www.garlic.com/~lynn/aadsm24.htm#23 Use of TPM chip for RNG?
https://www.garlic.com/~lynn/aadsm18.htm#48 Dell to Add Security Chip to PCs
https://www.garlic.com/~lynn/aadsm18.htm#47 Dell to Add Security Chip to PCs
https://www.garlic.com/~lynn/aadsm12.htm#14 Challenge to TCPA/Palladium detractors

--
virtualization experience starting Jan1968, online at home since Mar1970

ARPANET & IBM Internal Network

From: Lynn Wheeler <lynn@garlic.com>
Subject: ARPANET & IBM Internal Network
Date: 11 Jul, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024d.html#68 ARPANET & IBM Internal Network
https://www.garlic.com/~lynn/2024d.html#69 ARPANET & IBM Internal Network
https://www.garlic.com/~lynn/2024d.html#70 ARPANET & IBM Internal Network

trivia: a big part of the SBS demise was the SNA/VTAM window pacing algorithm ... the satellite round-trip delay meant that, even with a low-bandwidth link, the outstanding-packet transmission limit was reached long before returning ACKs started arriving ... so at minimum very low bandwidth utilization. It also contributed to capping terrestrial links at 56kbit ... even short-haul T1 was so fast that the outstanding-packet transmission limit was reached (relatively) long before the returning ACKs started arriving (so very little of the bandwidth could be used)
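
A minimal sketch of the window-pacing arithmetic; the window and packet sizes below are illustrative assumptions, the point is only that a fixed window caps throughput at window_bytes/round_trip regardless of link speed:

def window_cap_kbits(window_packets, packet_bytes, rtt_seconds):
    # a fixed-window protocol can have at most window_bytes in flight per RTT
    return 8 * window_packets * packet_bytes / rtt_seconds / 1000

WINDOW = 4          # outstanding packets (assumed, for illustration)
PACKET = 256        # bytes per packet (assumed, for illustration)

for label, rtt in (("short-haul terrestrial", 0.02),
                   ("single-hop satellite",   0.5),
                   ("double-hop satellite",   1.0)):
    print(f"{label:24s} rtt={rtt:4.2f}s  "
          f"cap ~{window_cap_kbits(WINDOW, PACKET, rtt):7.1f} kbit/sec")

# a full-duplex T1 is ~1500 kbit/sec each direction and even a 56kbit link
# exceeds the satellite-delay cap -- the sender simply sits waiting for ACKs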

There was an 80s incident where they wanted a double-hop satellite link between STL on the west coast and Hursley in England (up/down between the west and east coasts and then up/down between the east coast and England), so each could use the other's datacenter "off-shift". It was initially brought up with RSCS/VNET and there were no problems. Then an STL executive (infused with MVS/JES/SNA) insisted it be switched to JES2, and it wouldn't work. It was then switched back to RSCS/VNET, again with no problems. The executive then decided that RSCS/VNET was too dumb to know it wasn't working (even though data was flowing fine). The actual problem was that the JES2 link startup protocol would time out with the double-hop round-trip delay.

Trivia: HSDT very early went to dynamic adaptive rate-based pacing protocol (as opposed to window based protocol).

Trivia2: at a 1988 IETF (internet) meeting, there was a presentation of slow-start window pacing for TCP. However, at almost the same time, a 1988 ACM SIGCOMM article showed how slow-start window pacing was non-stable in a large, multi-hop internet ... returning ACKs bunch up at an intermediate hop and are then released in a burst ... resulting in the sender sending multiple back-to-back packets ... overrunning something in the infrastructure.

Trivia3: after the communication group presented to the corporate executive committee why customers wouldn't want "T1 support" until well into the 90s, they came out with the 3737 in the 2nd half of the 80s. It had a boat-load of memory and a Motorola 68K with a simulated mini-VTAM, simulating a CTCA to the local mainframe/VTAM. The 3737 would immediately ACK transmission to the local VTAM ... then use non-SNA with the remote 3737 ... trying to keep data flowing (although processing was capped around 2mbits/sec aggregate; aka US T1 full-duplex is 3mbits aggregate while EU T1 full-duplex is 4mbits aggregate)

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

some posts mentioning 3737, T1, rate-based
https://www.garlic.com/~lynn/2024d.html#7 TCP/IP Protocol
https://www.garlic.com/~lynn/2024b.html#56 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#54 Vintage Mainframe
https://www.garlic.com/~lynn/2023b.html#53 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2022c.html#80 Peer-Coupled Shared Data
https://www.garlic.com/~lynn/2022c.html#14 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2021j.html#16 IBM SNA ARB
https://www.garlic.com/~lynn/2021h.html#49 Dynamic Adaptive Resource Management
https://www.garlic.com/~lynn/2021c.html#97 What's Fortran?!?!
https://www.garlic.com/~lynn/2021c.html#83 IBM SNA/VTAM (& HSDT)
https://www.garlic.com/~lynn/2018f.html#109 IBM Token-Ring
https://www.garlic.com/~lynn/2017.html#57 TV Show "Hill Street Blues"
https://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2011g.html#77 Is the magic and romance killed by Windows (and Linux)?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM "Winchester" Disk

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM "Winchester" Disk
Date: 12 Jul, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#59 IBM "Winchester" Disk
https://www.garlic.com/~lynn/2024c.html#60 IBM "Winchester" Disk
https://www.garlic.com/~lynn/2024c.html#61 IBM "Winchester" Disk

The first thin-film head was used with the FBA3370 ... followed by the CKD3380 (although the transition to everything actually being FBA was already underway; the 3380 records/track formula has record length rounded up to "cell" size). The first 3380 had 20 track spacings between each data track; that was then cut in half for the double-capacity 3380 (twice the number of tracks) and then cut again for triple capacity. The father of 801/risc then asked me to see if I could help him with an idea for a disk "wide-head" handling 18 tracks; the format has two servo tracks and 16 data tracks (read/write the 16 data tracks in parallel, following the servo tracks, one on each side of the set of 16 data tracks). The problem was a 50mbytes/sec transfer rate while mainframe channels were still stuck at 3mbytes/sec .... this was about the same time as I was asked to help LLNL with what becomes FCS ... but POK doesn't become involved and doesn't announce FICON until much, much later.
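
Rough arithmetic on that mismatch (the ~3mbytes/sec per-track figure is the 3380-era data rate; treat it as an assumption for illustration):

per_track_mb = 3           # mbytes/sec, roughly the 3380-era per-head data rate
data_tracks = 16
aggregate_mb = per_track_mb * data_tracks      # ~48-50 mbytes/sec off the wide head

channel_mb = 3             # mbytes/sec, mainframe channel of the period
print(aggregate_mb, "mbytes/sec from the head vs", channel_mb, "mbytes/sec per channel")
print("would need ~", aggregate_mb // channel_mb, "channels running flat out to keep up")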

posts mentioning getting to play disk engineer in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk
FICON & FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon

misc. past posts mentioning cutting spacing between 3380 data tracks and disk "wide-head"
https://www.garlic.com/~lynn/2024b.html#110 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2023f.html#70 Vintage RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#67 Vintage IBM 3380s
https://www.garlic.com/~lynn/2023e.html#25 EBCDIC "Commputer Goof"
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023.html#86 IBM San Jose
https://www.garlic.com/~lynn/2021f.html#44 IBM Mainframe
https://www.garlic.com/~lynn/2021.html#56 IBM Quota
https://www.garlic.com/~lynn/2019b.html#75 IBM downturn
https://www.garlic.com/~lynn/2019b.html#52 S/360
https://www.garlic.com/~lynn/2018d.html#17 3390 teardown
https://www.garlic.com/~lynn/2018d.html#12 3390 teardown
https://www.garlic.com/~lynn/2018b.html#111 Didn't we have this some time ago on some SLED disks? Multi-actuator
https://www.garlic.com/~lynn/2017g.html#95 Hard Drives Started Out as Massive Machines That Were Rented by the Month
https://www.garlic.com/~lynn/2017d.html#54 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2014h.html#9 Demonstrating Moore's law

--
virtualization experience starting Jan1968, online at home since Mar1970

GOSIP

From: Lynn Wheeler <lynn@garlic.com>
Subject: GOSIP
Date: 12 Jul, 2024
Blog: Facebook
GOSIP
https://en.wikipedia.org/wiki/Government_Open_Systems_Interconnection_Profile

I was a member of Greg Chesson's XTP TAB in the 80s, with some gov. agencies involved ... prompting taking it to the ISO-chartered ANSI X3S3.3 (standards body for "level 3 and 4" standards) as HSP. We were eventually told that ISO required that standards correspond to / follow the OSI model; XTP/HSP failed because it 1) supported internetworking, which doesn't exist in the OSI model, 2) skipped the OSI model level 4/3 (transport/network) interface, and 3) went directly to the LAN/MAC interface (which also doesn't exist in the OSI model, sitting somewhere in the middle of level 3). Somebody was circulating a joke that IETF required two interoperable implementations to progress in the standards process, while ISO didn't even require that a standard be implementable.

I was at the ACM SIGMOD conference in the early 90s, and in a large auditorium session somebody asked what this x.500/x.509 happening in ISO was, and somebody else (I think one of the panelists up on stage) explained that it was a bunch of networking engineers attempting to reinvent 1960s-era database technology. Disclaimer: at various times in the 70s, 80s, and 90s I worked on relational (RDBMS) products.

I was on a panel up on stage in a large ballroom, standing room only, and took the opportunity to somewhat repeat the SIGMOD comment, 07Sep1998 21st National Information Systems Security Conference
https://csrc.nist.gov/pubs/conference/1998/10/08/proceedings-of-the-21st-nissc-1998/final
also made reference to taking a $500 mil-spec security chip and cost reducing it to less than a dollar while improving security; assurance panel in the trusted computing track at the 2001 IDF
https://web.archive.org/web/20011109072807/http://www.intel94.com/idf/spr2001/sessiondescription.asp?id=stp%2bs13

XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
interop '88 posts
https://www.garlic.com/~lynn/subnetwork.html#interop88
assurance posts
https://www.garlic.com/~lynn/subintegrity.html#assurance
original sql/relational RDBMS posts
https://www.garlic.com/~lynn/submain.html#systemr
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

some recent posts mentioning GOSIP
https://www.garlic.com/~lynn/2024b.html#99 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#54 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#37 Internet
https://www.garlic.com/~lynn/2023f.html#6 Internet
https://www.garlic.com/~lynn/2023d.html#56 How the Net Was Won
https://www.garlic.com/~lynn/2023.html#104 XTP, OSI & LAN/MAC
https://www.garlic.com/~lynn/2023.html#16 INTEROP 88 Santa Clara
https://www.garlic.com/~lynn/2022g.html#49 Some BITNET (& other) History
https://www.garlic.com/~lynn/2022f.html#20 IETF TCP/IP versus ISO OSI
https://www.garlic.com/~lynn/2021k.html#88 IBM and Internet Old Farts
https://www.garlic.com/~lynn/2021h.html#72 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2021e.html#55 SHARE (& GUIDE)
https://www.garlic.com/~lynn/2021d.html#20 The Rise of the Internet
https://www.garlic.com/~lynn/2021d.html#13 The Rise of the Internet
https://www.garlic.com/~lynn/2019e.html#86 5 milestones that created the internet, 50 years after the first network message
https://www.garlic.com/~lynn/2019.html#74 21 random but totally appropriate ways to celebrate the World Wide Web's 30th birthday
https://www.garlic.com/~lynn/2019.html#3 Network names

--
virtualization experience starting Jan1968, online at home since Mar1970

Some Email History

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Some Email History
Date: 12 Jul, 2024
Blog: Facebook
some early email history
https://www.multicians.org/thvv/mail-history.html

then some of the CTSS people went to the 5th flr for MULTICS and others went to the IBM Science Center on the 4th flr and did the virtual machine CP40/CMS on a 360/40 with hardware virtual memory mods ... it morphs into CP67/CMS when the 360/67, standard with virtual memory, becomes available. I was an undergraduate at the univ but hired fulltime, responsible for OS/360 (the 360/67 had been for tss/360, replacing a 709/1401, but tss/360 never came to production so it ran as a 360/65 with os/360); the univ shutdown the datacenter on weekends and I would have it dedicated, although 48hrs w/o sleep made Monday classes hard. Then CSC came out to install CP67 (3rd installation after CSC itself and MIT Lincoln Labs) and I mostly played with it in my weekend dedicated time.

CSC initially had 1052 and 2741 terminal support. At the univ, I add TTY/ASCII support, which CSC picks up and incorporates in the standard distributed CP67. Account of the MIT Urban Lab CP67 (across the tech sq quad from 545) crashing 27 times in a single day (I had done a hack using a single byte for line lengths, max 255 chars, and somebody down at Harvard wanted to use an ASCII device with max 1200 chars ... they failed to fiddle all the one-byte hacks).
https://www.multicians.org/thvv/360-67.html
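
A minimal sketch (hypothetical names, just illustrating the failure mode) of why a one-byte line-length field bites when someone configures 1200-character lines:

requested_max_line = 1200
stored = requested_max_line & 0xFF     # what actually fits in a one-byte field
print(stored)                          # 176, not 1200 -- the value silently wraps
# every place in the code that assumed "length fits in one byte" has to be
# found and changed; missing even one such spot is how a system crashes
# 27 times in a single day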

CSC 1st had message/mail for users on the same machine. Then one of the CSC people did the science center wide-area network (morphing into the corporate network, larger than the arpanet/internet from just about the beginning until sometime mid/late 80s; the technology was also used for the corporate-sponsored univ BITNET, also for a time larger than the arpanet/internet) ... account by one of the people who invented GML at the science center in 1969:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

Before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidating all dataprocessing into an independent business unit). Then when I graduate, I join the science center (instead of staying with the CFO office). One of my hobbies was enhanced production operating systems for internal datacenters, and the US online (branch office) sales&marketing support HONE systems were long-time customers. Early 70s there was a decision to start deploying HONE to the rest of the world, and I was asked to do some of the early non-US HONE installs, 1st in Paris ... and while there had to figure out how to do email back to the states.

In Aug1976, TYMSHARE
https://en.wikipedia.org/wiki/Tymshare
https://en.wikipedia.org/wiki/Tymnet
started offering their CMS-based online computer conferencing "free" to (IBM mainframe user group) SHARE
https://en.wikipedia.org/wiki/SHARE_(computing)
as VMSHARE ... archives here
http://vm.marist.edu/~vmshare

and I cut a deal with TYMSHARE to get a monthly tape dump of all VMSHARE files for putting up on the internal network and systems

At the great 1jan1983 internetworking protocol cut-over there were approx. 100 IMPs and 255 hosts, while the internal network was rapidly approaching 1000, which it passes a few months later. Archived post with corporate locations that added one or more new nodes during 1983
https://www.garlic.com/~lynn/2006k.html#8

IBM CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET (and EARN) posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml

some posts mentioning CTSS/EMAIL history
https://www.garlic.com/~lynn/2023c.html#78 IBM TLA
https://www.garlic.com/~lynn/2023b.html#88 Online systems fostering online communication
https://www.garlic.com/~lynn/2022h.html#122 The History of Electronic Mail
https://www.garlic.com/~lynn/2022c.html#27 IBM Cambridge Science Center
https://www.garlic.com/~lynn/2022c.html#25 IBM Mainframe time-sharing
https://www.garlic.com/~lynn/2021h.html#50 PROFS
https://www.garlic.com/~lynn/2021g.html#92 Was E-mail a Mistake? The mathematics of distributed systems suggests that meetings might be better
https://www.garlic.com/~lynn/2018f.html#54 PROFS, email, 3270
https://www.garlic.com/~lynn/2018.html#31 1963 Timesharing: A Solution to Computer Bottlenecks
https://www.garlic.com/~lynn/2017c.html#21 Congratulations IBM for 'inventing' out-of-office email. You win Stupid Patent of the Month
https://www.garlic.com/~lynn/2014d.html#39 [CM] Ten recollections about the early WWW and Internet
https://www.garlic.com/~lynn/2012h.html#51 The Invention of Email
https://www.garlic.com/~lynn/2012c.html#15 Authorized functions
https://www.garlic.com/~lynn/2012c.html#12 Inventor of e-mail honored by Smithsonian
https://www.garlic.com/~lynn/2012b.html#81 The PC industry is heading for collapse
https://www.garlic.com/~lynn/2011h.html#49 OT The inventor of Email - Tom Van Vleck
https://www.garlic.com/~lynn/2011h.html#44 OT The inventor of Email - Tom Van Vleck

--
virtualization experience starting Jan1968, online at home since Mar1970

Joe Biden Kicked Off the Encryption Wars

From: Lynn Wheeler <lynn@garlic.com>
Subject: Joe Biden Kicked Off the Encryption Wars
Date: 13 Jul, 2024
Blog: Facebook
Joe Biden Kicked Off the Encryption Wars
https://newsletter.pessimistsarchive.org/p/joe-biden-kicked-off-the-encryption

I got HSDT in the early 80s (T1 and faster computer links, both terrestrial and satellite) ... there was a corporate requirement that links had to be encrypted, and I hated what I had to pay for T1 link encryptors, and faster encryptors were hard to find. I did some benchmarking of software (DES) encryption and it would require both IBM 3081K processors dedicated to handle DES encryption/decryption for a single T1 full-duplex link. I then became involved in a link encryptor that would handle 3mbytes/sec and cost less than $100 to make. The corporate DES encryption group told me it was much weaker than DES encryption (and couldn't be used). It took me three months to figure out how to explain to the group that, rather than much weaker, it was much stronger than DES encryption. It was a hollow victory; I was then told there was only one operation in the world allowed to use such encryption. I could make as many as I wanted, but they all had to be sent to an address on the east coast. It was when I realized there were three kinds of crypto: 1) the kind they don't care about, 2) the kind you can't do, and 3) the kind you can only do for them.
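
A rough back-of-envelope reading of that benchmark; the ~14 MIPS figure for a two-processor 3081K is an assumption for illustration (as is the round T1 payload number), the point is just the order of magnitude:

t1_full_duplex_bytes = 3_000_000 / 8       # ~3 mbit/sec aggregate -> ~375 kbytes/sec
assumed_3081k_ips = 14_000_000             # ~14 MIPS across both processors (assumption)

print(f"~{assumed_3081k_ips / t1_full_duplex_bytes:.0f} instructions per byte")
# i.e. software DES at a few dozen instructions per byte soaks up both 3081K
# processors just to keep a single full-duplex T1 encrypted -- hence the
# interest in cheap hardware link encryptors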

Some 15 or so yrs later, I was rep to financial industry standards, including "commercial" key-escrow meetings. I did a presentation on how escrow of keys used for authentication would be a violation of basic security practices (santa cruz meeting?). Some people got upset, claiming that users could misuse authentication keys for encryption (I believe it was the last meeting of that particular committee). Part of the issue was that business information in electronic transactions could be used for fraudulent transactions. I worked hard on a transaction standard with much stronger authentication, which eliminated the anti-fraud requirement for encrypting transactions. Eliminating the encryption requirement for financial transactions made some gov. types happy, but they were unhappy that strong authentication required that the keys couldn't be escrowed (as well as that authentication didn't mean identification ... aka PAIN/CAIN: Privacy/Confidentiality, Authentication, Identification, Non-repudiation)

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
three factor authentication posts
https://www.garlic.com/~lynn/subintegrity.html#3factor
risk, fraud, explots, threats, vulnerability posts
https://www.garlic.com/~lynn/subintegrity.html#fraud
secrets and account numbers posts
https://www.garlic.com/~lynn/subintegrity.html#secrets

posts mentioning realizing there were three kinds of crypto
https://www.garlic.com/~lynn/2024b.html#36 Internet
https://www.garlic.com/~lynn/2023f.html#79 Vintage Mainframe XT/370
https://www.garlic.com/~lynn/2022d.html#73 WAIS. Z39.50
https://www.garlic.com/~lynn/2022d.html#29 Network Congestion
https://www.garlic.com/~lynn/2022.html#57 Computer Security
https://www.garlic.com/~lynn/2021e.html#75 WEB Security
https://www.garlic.com/~lynn/2021e.html#58 Hacking, Exploits and Vulnerabilities
https://www.garlic.com/~lynn/2021c.html#70 IBM/BMI/MIB
https://www.garlic.com/~lynn/2021b.html#57 In the 1970s, Email Was Special
https://www.garlic.com/~lynn/2021b.html#22 IBM Recruiting
https://www.garlic.com/~lynn/2021b.html#8 IBM Travel
https://www.garlic.com/~lynn/2019e.html#86 5 milestones that created the internet, 50 years after the first network message
https://www.garlic.com/~lynn/2018d.html#33 Online History
https://www.garlic.com/~lynn/2017g.html#91 IBM Mainframe Ushers in New Era of Data Protection
https://www.garlic.com/~lynn/2017g.html#35 Eliminating the systems programmer was Re: IBM cuts contractor billing by 15 percent (our else)
https://www.garlic.com/~lynn/2017e.html#58 A flaw in the design; The Internet's founders saw its promise but didn't foresee users attacking one another
https://www.garlic.com/~lynn/2016e.html#31 How the internet was invented
https://www.garlic.com/~lynn/2016c.html#57 Institutional Memory and Two-factor Authentication
https://www.garlic.com/~lynn/2015h.html#3 PROFS & GML
https://www.garlic.com/~lynn/2015e.html#2 Western Union envisioned internet functionality
https://www.garlic.com/~lynn/2015c.html#85 On a lighter note, even the Holograms are demonstrating
https://www.garlic.com/~lynn/2014j.html#77 No Internet. No Microsoft Windows. No iPods. This Is What Tech Was Like In 1984
https://www.garlic.com/~lynn/2014i.html#54 IBM Programmer Aptitude Test
https://www.garlic.com/~lynn/2014e.html#27 TCP/IP Might Have Been Secure From the Start If Not For the NSA
https://www.garlic.com/~lynn/2014e.html#25 Is there any MF shop using AWS service?
https://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2014.html#9 NSA seeks to build quantum computer that could crack most types of encryption
https://www.garlic.com/~lynn/2013l.html#23 Teletypewriter Model 33
https://www.garlic.com/~lynn/2013i.html#69 The failure of cyber defence - the mindset is against it
https://www.garlic.com/~lynn/2012.html#63 Reject gmail
https://www.garlic.com/~lynn/2011n.html#63 ARPANET's coming out party: when the Internet first took center stage
https://www.garlic.com/~lynn/2011h.html#0 We list every company in the world that has a mainframe computer
https://www.garlic.com/~lynn/2010o.html#43 Internet Evolution - Part I: Encryption basics
https://www.garlic.com/~lynn/2009p.html#32 Getting Out Hard Drive in Real Old Computer
https://www.garlic.com/~lynn/2008i.html#86 Own a piece of the crypto wars
https://www.garlic.com/~lynn/2008h.html#87 New test attempt

--
virtualization experience starting Jan1968, online at home since Mar1970

Some work before IBM

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Some work before IBM
Date: 13 Jul, 2024
Blog: Facebook
I took a two credit hr intro to fortran/computers class and at the end of the semester was hired to rewrite 1401 MPIO for the 360/30. The univ was getting a 360/67 for TSS/360 to replace a 709/1401 and temporarily got a 360/30 replacing the 1401 pending availability of the 360/67. Within a year of taking the intro class, the 360/67 came in and I was hired fulltime responsible for OS/360 (tss/360 didn't come to production, so it ran as a 360/65). Student fortran ran under a second on the 709 (tape->tape), but over a minute on os/360 (fortgclg). I install HASP, which cuts the time in half. I then carefully reorder the stage2 sysgen for placement of datasets and PDS members to optimize disk arm seek and multi-track searches, reducing the time by another 2/3rds to 12.9secs. Student Fortran never gets better than the 709 until I install Univ. of Waterloo WATFOR.
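
Working the quoted figures backward (nothing new here, just the arithmetic):

after_sysgen_reorder = 12.9            # secs, final figure quoted above
after_hasp = after_sysgen_reorder * 3  # ~38.7 secs (the reorder cut another ~2/3rds)
before_hasp = after_hasp * 2           # ~77 secs (HASP had cut the time in half)
print(round(after_hasp, 1), round(before_hasp, 1))   # consistent with "over a minute"
# versus under a second per student job on the 709 (tape->tape)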

Later the Univ. Library gets an ONR grant to do an online catalog, some of the money goes for an IBM 2321 datacell, and the project is also selected as betatest for the original CICS product ... and CICS support was added to my tasks. The first problem was CICS wouldn't come up. The problem was that CICS had some undocumented, hardcoded BDAM options and the library had built its BDAM datasets with a different set of options. Yelavich URLs are gone 404, but live on at the wayback machine
https://web.archive.org/web/20050409124902/http://www.yelavich.com/cicshist.htm
https://web.archive.org/web/20071124013919/http://www.yelavich.com/history/toc.htm

Also CSC had come by to install CP67 (3rd installation after CSC itself and MIT Lincoln Labs) and I get to play with it mostly during my weekend dedicated time. I spend the first few months rewriting CP67 to cut the CPU overhead of running OS/360 in a virtual machine. The OS/360 jobstream ran 322secs on the bare machine and initially 856secs in a virtual machine; I got the 534secs of CP67 CPU overhead down to 113secs.
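
The overhead numbers spelled out (just restating the figures above):

bare = 322                 # secs, OS/360 jobstream on the bare machine
virtual_initial = 856      # secs, same jobstream in a CP67 virtual machine
overhead_initial = virtual_initial - bare     # 534 secs of CP67 CPU
overhead_after = 113                          # after the first few months of rewrites
print(overhead_initial, bare + overhead_after)    # 534, 435 secs total in the VM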

Before I graduate I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidating all dataprocessing into an independent business unit). I think Renton was the largest datacenter in the world, 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room. Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing Field for payroll (although they enlarge the machine room for a 360/67 for me to play with when I wasn't doing other stuff). When I graduate, I join the IBM science center (instead of staying with the Boeing CFO).

CICS/BDAM posts
https://www.garlic.com/~lynn/submain.html#cics
IBM CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech

Posts mentioning Boeing CFO, BCS, Renton, 1401 MPIO, fortran/watfor, and CP/67
https://www.garlic.com/~lynn/2024d.html#63 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2022d.html#95 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#57 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022d.html#19 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2020.html#32 IBM TSS
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles

--
virtualization experience starting Jan1968, online at home since Mar1970

Other Silicon Valley

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Other Silicon Valley
Date: 13 Jul, 2024
Blog: Facebook
aug1976, TYMSHARE
https://en.wikipedia.org/wiki/Tymshare
started offering its (VM370/)CMS-based online computer conferencing free to SHARE
https://en.wikipedia.org/wiki/SHARE_(computing)
as VMSHARE ... archives here
http://vm.marist.edu/~vmshare
I cut a deal with TYMSHARE to get a monthly tape dump of all VMSHARE files for putting up on internal systems and network.

SLAC was sponsoring monthly user group meetings and afterwards we would usually adjourn to either the "O" or the "Goose". MD was buying TYMSHARE in 1984 and I was brought in to evaluate GNOSIS for the spinoff; I was also asked if I could find anybody in IBM to make Engelbart an offer. SLAC was doing the 168E & then the 3081E with CERN ... emulating enough of a 360 to run FORTRAN, placed along the (beam) line to do initial data reduction/analysis:
http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-3069.pdf
http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-3680.pdf
http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-3753.pdf
and then SLAC had 1st webserver in the US:
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994

posts mentioning TYMSHARE VMSHARE online computer conferencing, Engelbart, Gnosis, SLAC
https://www.garlic.com/~lynn/2023e.html#9 Tymshare
https://www.garlic.com/~lynn/2018f.html#77 Douglas Engelbart, the forgotten hero of modern computing
https://www.garlic.com/~lynn/2014d.html#44 [CM] Ten recollections about the early WWW and Internet
https://www.garlic.com/~lynn/2012i.html#40 GNOSIS & KeyKOS
https://www.garlic.com/~lynn/2012i.html#39 Just a quick link to a video by the National Research Council of Canada made in 1971 on computer technology for filmmaking

--
virtualization experience starting Jan1968, online at home since Mar1970

Other Silicon Valley

From: Lynn Wheeler <lynn@garlic.com>
Subject: Other Silicon Valley
Date: 14 Jul, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024d.html#77 Other Silicon Valley

Ann Hardy at Computer History Museum
https://www.computerhistory.org/collections/catalog/102717167
Ann rose up to become Vice President of the Integrated Systems Division at Tymshare, from 1976 to 1984, which did online airline reservations, home banking, and other applications. When Tymshare was acquired by McDonnell-Douglas in 1984, Ann's position as a female VP became untenable, and was eased out of the company by being encouraged to spin out Gnosis, a secure, capabilities-based operating system developed at Tymshare. Ann founded Key Logic, with funding from Gene Amdahl, which produced KeyKOS, based on Gnosis, for IBM and Amdahl mainframes. After closing Key Logic, Ann became a consultant, leading to her cofounding Agorics with members of Ted Nelson's Xanadu project.

... snip ...

Ann Hardy
https://medium.com/chmcore/someone-elses-computer-the-prehistory-of-cloud-computing-bca25645f89
Ann Hardy is a crucial figure in the story of Tymshare and time-sharing. She began programming in the 1950s, developing software for the IBM Stretch supercomputer. Frustrated at the lack of opportunity and pay inequality for women at IBM -- at one point she discovered she was paid less than half of what the lowest-paid man reporting to her was paid -- Hardy left to study at the University of California, Berkeley, and then joined the Lawrence Livermore National Laboratory in 1962. At the lab, one of her projects involved an early and surprisingly successful time-sharing operating system.

... snip ...

If Discrimination, Then Branch: Ann Hardy's Contributions to Computing
https://computerhistory.org/blog/if-discrimination-then-branch-ann-hardy-s-contributions-to-computing/

past posts mention Ann Hardy
https://www.garlic.com/~lynn/2024d.html#47 E-commerce
https://www.garlic.com/~lynn/2024c.html#25 Tymshare & Ann Hardy
https://www.garlic.com/~lynn/2023e.html#62 IBM Jargon
https://www.garlic.com/~lynn/2023e.html#12 Tymshare
https://www.garlic.com/~lynn/2023e.html#9 Tymshare
https://www.garlic.com/~lynn/2023d.html#37 Online Forums and Information
https://www.garlic.com/~lynn/2023d.html#16 Grace Hopper (& Ann Hardy)
https://www.garlic.com/~lynn/2023c.html#97 Fortran
https://www.garlic.com/~lynn/2023b.html#35 When Computer Coding Was a 'Woman's' Job
https://www.garlic.com/~lynn/2022h.html#60 Fortran
https://www.garlic.com/~lynn/2022g.html#92 TYMSHARE
https://www.garlic.com/~lynn/2021k.html#92 Cobol and Jean Sammet
https://www.garlic.com/~lynn/2021k.html#0 Women in Computing
https://www.garlic.com/~lynn/2021j.html#71 book review: Broad Band: The Untold Story of the Women Who Made the Internet
https://www.garlic.com/~lynn/2021h.html#98 CMSBACK, ADSM, TSM
https://www.garlic.com/~lynn/2019d.html#27 Someone Else's Computer: The Prehistory of Cloud Computing

--
virtualization experience starting Jan1968, online at home since Mar1970

Other Silicon Valley

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Other Silicon Valley
Date: 14 Jul, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024d.html#77 Other Silicon Valley
https://www.garlic.com/~lynn/2024d.html#78 Other Silicon Valley

Tymnet
https://en.wikipedia.org/wiki/Tymnet
In 1984 Tymnet was bought by the McDonnell Douglas Corporation as part of the acquisition of Tymshare.[6] The company was renamed McDonnell Douglas Tymshare, and began a major reorganization. A year later, McDonnell Douglas (MD) split Tymshare into several separate operating companies: MD Network Systems Company, MD Field Service Company, MD RCS, MD "xxx" and many more. (This is sometimes referred to the Alphabet Soup phase of the company). At this point, Tymnet had outlived its parent company Tymshare.

... snip ...

Tymshare
https://en.wikipedia.org/wiki/Tymshare
McDonnell Douglas was acquired by Boeing. Consequently, rights to use technology developed by Tymshare are currently held by Boeing, British Telecom (BT), Verizon Communications, and AT&T Inc. due to the acquisitions and mergers from 1984 through 2005.

... snip ...

Tymnet Sold off
https://en.wikipedia.org/wiki/Tymshare#MDC_Network_Systems_Company_sold_to_British_Telecom
On July 30, 1989, it was announced that British Telecom was purchasing McDonnell Douglas Network Systems Company, and McDonnell Douglas Field Service Company was being spun off as a start-up called NovaDyne. McDonnell Douglas was later acquired by Boeing. British Telecom (BT) wanted to expand and the acquisition of Tymnet which was already a worldwide data network helped to achieve that goal. On November 17, 1989 MDNSC officially became BT Tymnet with its headquarters in San Jose, California. BT brought with it the idea of continuous development with teams in America, Europe, and Asia-pacific all working together on the same projects. BT renamed the Tymnet services, Global Network Services (GNS).

... snip ...

... trivia: I had taken a two credit hr intro to fortran/computers and at the end of the semester was hired to rewrite 1401 MPIO for the 360/30. Univ was getting a 360/67 to replace 709/1401 and temporarily got a 360/30 replacing the 1401, pending delivery of the 360/67. Within a year, the 360/67 arrived and I was hired fulltime responsible for OS/360 (tss/360 didn't come to production so the 360/67 ran as a 360/65). Before I graduate, I was hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit). I thought the Renton datacenter was possibly the largest in the world, 360/65s arriving faster than they could be installed, boxes constantly staged in hallways around the machine room. Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing Field for payroll, although they enlarge the machine room for a 360/67, for me to play with when I wasn't doing other stuff. When I graduate, I join the IBM science center (instead of staying with Boeing CFO).

Keeping in contact with Boeing: regarding claims about the acquisition of M/D, in 2016 one of "The Boeing Century" articles was about how the merger with MD had nearly taken down Boeing and may yet still (infusion of military industrial complex culture into a commercial operation)
https://issuu.com/pnwmarketplace/docs/i20160708144953115

The Coming Boeing Bailout?
https://mattstoller.substack.com/p/the-coming-boeing-bailout
Unlike Boeing, McDonnell Douglas was run by financiers rather than engineers. And though Boeing was the buyer, McDonnell Douglas executives somehow took power in what analysts started calling a "reverse takeover." The joke in Seattle was, "McDonnell Douglas bought Boeing with Boeing's money."

... snip ...

Crash Course
https://newrepublic.com/article/154944/boeing-737-max-investigation-indonesia-lion-air-ethiopian-airlines-managerial-revolution
Sorscher had spent the early aughts campaigning to preserve the company's estimable engineering legacy. He had mountains of evidence to support his position, mostly acquired via Boeing's 1997 acquisition of McDonnell Douglas, a dysfunctional firm with a dilapidated aircraft plant in Long Beach and a CEO who liked to use what he called the "Hollywood model" for dealing with engineers: Hire them for a few months when project deadlines are nigh, fire them when you need to make numbers. In 2000, Boeing's engineers staged a 40-day strike over the McDonnell deal's fallout; while they won major material concessions from management, they lost the culture war. They also inherited a notoriously dysfunctional product line from the corner-cutting market gurus at McDonnell.

... snip ...

Boeing's travails show what's wrong with modern capitalism. Deregulation means a company once run by engineers is now in the thrall of financiers and its stock remains high even as its planes fall from the sky
https://www.theguardian.com/commentisfree/2019/sep/11/boeing-capitalism-deregulation

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

posts mentioning Boeing/MD "merger"
https://www.garlic.com/~lynn/2024c.html#9 Boeing and the Dark Age of American Manufacturing
https://www.garlic.com/~lynn/2024.html#56 Did Stock Buybacks Knock the Bolts Out of Boeing?
https://www.garlic.com/~lynn/2023g.html#104 More IBM Downfall
https://www.garlic.com/~lynn/2022h.html#18 Sun Tzu, Aristotle, and John Boyd
https://www.garlic.com/~lynn/2022d.html#91 Short-term profits and long-term consequences -- did Jack Welch break capitalism?
https://www.garlic.com/~lynn/2022b.html#117 Downfall: The Case Against Boeing
https://www.garlic.com/~lynn/2022.html#109 Not counting dividends IBM delivered an annualized yearly loss of 2.27%
https://www.garlic.com/~lynn/2021k.html#69 'Flying Blind' Review: Downward Trajectory
https://www.garlic.com/~lynn/2021k.html#40 Boeing Built an Unsafe Plane, and Blamed the Pilots When It Crashed
https://www.garlic.com/~lynn/2018c.html#60 11 crazy up-close photos of the F-22 Raptor stealth fighter jet soaring through the air
https://www.garlic.com/~lynn/2018c.html#26 DoD watchdog: Air Force failed to effectively manage F-22 modernization
https://www.garlic.com/~lynn/2018c.html#21 How China's New Stealth Fighter Could Soon Surpass the US F-22 Raptor
https://www.garlic.com/~lynn/2017k.html#58 Failures and Resiliency
https://www.garlic.com/~lynn/2016e.html#20 The Boeing Century

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM ATM At San Jose Plant Site

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM ATM At San Jose Plant Site
Date: 14 Jul, 2024
Blog: Facebook
after being blamed for online computer conferencing on the internal network (larger than arpanet/internet from just about the beginning until sometime mid/late 80s), I was transferred from San Jose Research to Yorktown ... but left living in San Jose; besides the SJR office (and having to commute to YKT a couple times a month), I got part of a wing and labs out at the Los Gatos lab (and when Research moved to Almaden, an office up there).

Down in LSG basement there was still stuff from ATM development and testing
https://groups.google.com/g/alt.folklore.computers/c/I-vj0q8jlko/m/uF9VEzGYAwAJ
https://en.wikipedia.org/wiki/IBM_3624
https://www.ibm.com/history/atm

archive of the above a.f.c. (google groups) post
https://www.garlic.com/~lynn/2016e.html#13 Looking for info on IBM ATMs - 2984, 3614, and 3624

CMC posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
801 posts
https://www.garlic.com/~lynn/subtopic.html#801

summary of payment protocol work
https://www.garlic.com/~lynn/x959.html
post mentioning the work
https://www.garlic.com/~lynn/subpubkey.html#x959

posts mentioning internet council trials (including 2016e.html#13)
https://www.garlic.com/~lynn/2022b.html#103 AADS Chip Strawman
https://www.garlic.com/~lynn/2021k.html#17 Data Breach
https://www.garlic.com/~lynn/2021h.html#74 "Safe" Internet Payment Products
https://www.garlic.com/~lynn/2018f.html#97 America's janky payment system, explained
https://www.garlic.com/~lynn/2018c.html#95 Tandem Memos
https://www.garlic.com/~lynn/2017i.html#44 learning Unix, was progress in e-mail, such as AOL
https://www.garlic.com/~lynn/2017e.html#76 Typesetting
https://www.garlic.com/~lynn/2017d.html#92 Old hardware
https://www.garlic.com/~lynn/2016e.html#13 Looking for info on IBM ATMs - 2984, 3614, and 3624
https://www.garlic.com/~lynn/2016.html#100 3270 based ATMs
https://www.garlic.com/~lynn/2016.html#66 Lineage of TPF
https://www.garlic.com/~lynn/2015h.html#90 History--did relay logic (e.g. crossbar switch) need air conditioning?
https://www.garlic.com/~lynn/2015h.html#1 FALSE: Reverse PIN Panic Code
https://www.garlic.com/~lynn/2014l.html#55 LA Times commentary: roll out "smart" credit cards to deter fraud
https://www.garlic.com/~lynn/2014k.html#53 LA Times commentary: roll out "smart" credit cards to deter fraud
https://www.garlic.com/~lynn/2014g.html#37 Special characters for Passwords
https://www.garlic.com/~lynn/2014f.html#17 Online Debit, Credit Fraud Will Soon Get Much Worse
https://www.garlic.com/~lynn/2014e.html#64 How the IETF plans to protect the web from NSA snooping
https://www.garlic.com/~lynn/2013j.html#21 8080 BASIC
https://www.garlic.com/~lynn/2012c.html#61 PC industry is heading for more change
https://www.garlic.com/~lynn/2012b.html#71 Password shortcomings
https://www.garlic.com/~lynn/2011b.html#11 Credit cards with a proximity wifi chip can be as safe as walking around with your credit card number on a poster
https://www.garlic.com/~lynn/2011.html#41 Looking for a real Fortran-66 compatible PC compiler (CP/M or DOSor Windows
https://www.garlic.com/~lynn/2010k.html#28 taking down the machine - z9 series
https://www.garlic.com/~lynn/2010k.html#14 taking down the machine - z9 series
https://www.garlic.com/~lynn/2010h.html#69 Idiotic programming style edicts
https://www.garlic.com/~lynn/2010h.html#54 Trust Facade
https://www.garlic.com/~lynn/2010f.html#27 Should the USA Implement EMV?
https://www.garlic.com/~lynn/2010f.html#26 Should the USA Implement EMV?
https://www.garlic.com/~lynn/2009r.html#16 70 Years of ATM Innovation
https://www.garlic.com/~lynn/2009p.html#44 Nearly 500 People Fall Victim to ATM Skimming Scam
https://www.garlic.com/~lynn/2009n.html#71 Sophisticated cybercrooks cracking bank security efforts
https://www.garlic.com/~lynn/2009n.html#4 Voltage SecureData Now Provides Distributed End-to-End Encryption of Sensitive Data
https://www.garlic.com/~lynn/2009i.html#12 Latest Pilot Will Put Online PIN Debit to the Test for Credit Unions
https://www.garlic.com/~lynn/2009g.html#64 What happened to X9.59?
https://www.garlic.com/~lynn/2008p.html#69 ATM PIN through phone or Internet. Is it secure? Is it allowed by PCI-DSS?, Visa, MC, etc.?
https://www.garlic.com/~lynn/2008p.html#31 FC5 Special Workshop CFP: Emerging trends in Online Banking and Electronic Payments
https://www.garlic.com/~lynn/2008p.html#28 Can Smart Cards Reduce Payments Fraud and Identity Theft?
https://www.garlic.com/~lynn/aadsm14.htm#31 Maybe It's Snake Oil All the Way Down
https://www.garlic.com/~lynn/aadsm14.htm#28 Maybe It's Snake Oil All the Way Down
https://www.garlic.com/~lynn/aadsm12.htm#8 [3d-secure] 3D Secure and EMV
https://www.garlic.com/~lynn/aadsm12.htm#1 3D Secure GUI
https://www.garlic.com/~lynn/aepay11.htm#65 E-merchants Turn Fraud-busters (somewhat related)

--
virtualization experience starting Jan1968, online at home since Mar1970

APL and REXX Programming Languages

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: APL and REXX Programming Languages
Date: 15 Jul, 2024
Blog: Facebook
long ago and far away, after graduating and joining IBM, one of my hobbies was enhanced production operating systems for internal data centers, and HONE was a long-time customer. The 23Jun1969 IBM unbundling announcement started charging for SE (customer support) time, maintenance, and (application) software (managed to make the case that kernel software was still free). Normal SE training included being part of a large group at a customer location, but they couldn't figure out how to NOT charge for trainee time. Thus was born HONE, originally US CP67 datacenters with branch office online access where SEs could practice with guest operating systems running in virtual machines. The science center had also ported APL\360 to CP67/CMS for CMS\APL, with fixes for running in large, demand paged virtual memory and APIs for system services like file I/O. HONE then started deploying CMS\APL-based branch office online sales&marketing support applications which came to dominate all HONE activity (SE practicing with guest operating systems withered away). I was then asked to do some of the first non-US HONE installations ... and then the migration to VM370/CMS, with HONE clone installations (and their APL-based applications) propagating all over the world (by far the largest use of APL in the world). Trivia: in the mid-70s, all US HONE datacenters were consolidated in silicon valley; when FACEBOOK 1st moved into silicon valley, it was into a new bldg built next door to the former consolidated US HONE datacenter bldg.

not so long ago and far away (spring '82) ... before it was renamed and released as rexx, I wanted to show it wasn't just another pretty scripting language; the demo was to redo a very large assembler program (program failure and dump analysis) in three months elapsed time, working half time, with ten times the function and ten times the performance (some hacks to have the interpreted language running faster than assembler); I finished early, so decided to implement some automated scripts that searched for common failure signatures.

I thought that it would be released to customers, but for whatever reason it wasn't (even though nearly every PSR and internal datacenter was using it; this was early in the OCO-wars, "object code only" ... customers complaining that source would no longer be available). I did manage to get approval to give user group presentations on how I did the implementation ... and within a few months, similar implementations started appearing. I eventually did get a request from the 3090 service processor (3092) group to release it on the service processor.
https://web.archive.org/web/20230719145910/https://www.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3090.html
some old email
https://www.garlic.com/~lynn/2010e.html#email861031
https://www.garlic.com/~lynn/2010e.html#email861223
in this archived post
https://www.garlic.com/~lynn/2010e.html#32

HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
23jun1969, unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm posts
https://www.garlic.com/~lynn/submisc.html#cscvm
dumprx posts
https://www.garlic.com/~lynn/submain.html#dumprx

--
virtualization experience starting Jan1968, online at home since Mar1970

APL and REXX Programming Languages

From: Lynn Wheeler <lynn@garlic.com>
Subject: APL and REXX Programming Languages
Date: 15 Jul, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024d.html#81 APL and REXX Programming Languages

a little internet content: a member of the science center was responsible for the science center CP/67-based wide-area network; account by one of the GML inventors (GML was invented 1969 at the science center)
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

science center wide-area network morphing into corporate network (larger than arpanet/internet from inception as CP67-based wide-area network, until sometime mid/late 80s) ... technology also used for the corporate sponsored univ BITNET&EARN
https://en.wikipedia.org/wiki/BITNET
https://en.wikipedia.org/wiki/European_Academic_and_Research_Network
https://earn-history.net/technology/the-network/

Edson
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to DARPA, where Hendricks described his innovations to the principal scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75, Cerf and Hendricks were the only two delegates from the United States, to attend a workshop on Data Communications at the International Institute for Applied Systems Analysis, 2361 Laxenburg Austria where again, Hendricks spoke publicly about his innovative design which paved the way to the Internet as we know it today.

... snip ...

SJMerc article about Edson (he passed aug2020) and "IBM'S MISSED OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
bitnet (& earn) posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

Continuations

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Continuations
Newsgroups: comp.arch
Date: Mon, 15 Jul 2024 12:31:46 -1000
A little over a decade ago, I was asked if I could track down the decision to add virtual memory to all IBM 370s and found a former staff member to the executive making the decision. Basically OS/360 MVT storage management was so bad that region sizes had to be specified four times larger than used ... as a result only four regions could be running concurrently on a typical 1mbyte 370/165 system, insufficient to keep the system busy and justified.

Mapping MVT to a 16mbyte virtual address space (vs2/svs, similar to running MVT in a CP67 16mbyte virtual machine) allowed the number of concurrently running regions to be increased by a factor of four (limited to 15 with the 4bit storage protect keys keeping regions isolated). However, as systems got larger, even fifteen weren't sufficient and they went to providing a separate 16mbyte virtual address space for each region (VS2/MVS). However, the OS/360 heavy pointer-passing API resulted in mapping an 8mbyte image of the MVS kernel into every virtual address space (so kernel API calls could access API parameters) ... leaving 8mbytes for the application. Then, because subsystems were also placed in their own separate 16mbyte virtual address spaces, for subsystems to access callers' parameters they mapped a 1mbyte "common segment area" (CSA) into every virtual address space (leaving 7mbytes for applications). However, CSA space requirements were proportional to concurrently executing applications and number of subsystems ... and it quickly became the multi-mbyte "common system area". By 3033 time-frame CSA was frequently 5-6mbytes (leaving 2-3mbytes for applications) and threatening to become 8mbytes (leaving zero).
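
A back-of-envelope sketch in Python of that squeeze (sizes taken from the paragraph above; illustrative only):

MB = 1 << 20                     # 1 megabyte
total  = 16 * MB                 # 24-bit address space
kernel = 8 * MB                  # MVS kernel image mapped into every address space

for csa_mb in (1, 5, 6, 8):      # CSA growth over time (values from the text above)
    app = total - kernel - csa_mb * MB
    print(f"CSA {csa_mb}mbyte leaves {app // MB}mbyte for an application")
# 1mbyte CSA leaves 7mbytes, 5-6mbytes leave 2-3mbytes, and 8mbytes would leave zero.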

This was part of the mad rush to 370/XA & MVS/XA with 31-bit addressing and "access registers" (a semi-privileged subsystem could access the caller's address space as a "secondary address space"). As a temporary stop-gap, a subset of "access registers" was retrofitted to the 3033 for MVS, as "dual-address space mode". However, kernel code was still required to swap the hardware address space pointers (subsystem call passing through the kernel, moving the caller's address space pointer to secondary and loading the subsystem's address space pointer as primary ... and then restoring on return). Then came hardware support with a "program call" table ... an entry for each subsystem, so a subsystem call could be handled entirely by hardware, including the address space pointer swapping (w/o the overhead of the kernel software).

"program transfer" ... more like continuation,

1983 370/XA Principles of Operation SA22-7085
https://bitsavers.org/pdf/ibm/370/princOps/SA22-7085-0_370-XA_Principles_of_Operation_Mar83.pdf
pg3-5 Primary & Secondary Virtual Address
pg10-22 "Program Call"
pg10-28 "Program Transfer"


"program transfer" originally could also be used to restore CPU to state saved by previous "Program Call" (aka return, programming notes pg10-30)

Principles of Operation SA22-7832-13 (May 2022)
https://publibfp.dhe.ibm.com/epubs/pdf/a227832d.pdf
pg3-23 "Address Spaces"
pg10-97 "Program Call"
pg10-110 "Program Return"
pg10-114 "Program Transfer"


posts mentioning "program call" and/or "dual address space"
https://www.garlic.com/~lynn/2022c.html#69 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2021i.html#17 Versatile Cache from IBM
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2019.html#18 IBM assembler
https://www.garlic.com/~lynn/2018.html#96 S/360 addressing, not Honeywell 200
https://www.garlic.com/~lynn/2017i.html#57 64 bit addressing into the future
https://www.garlic.com/~lynn/2016e.html#3 S/360 stacks, was self-modifying code, Is it a lost cause?
https://www.garlic.com/~lynn/2013m.html#71 'Free Unix!': The world-changing proclamation made 30 years agotoday
https://www.garlic.com/~lynn/2012p.html#26 Mainframes are still the best platform for high volume transaction processing
https://www.garlic.com/~lynn/2012n.html#21 8-bit bytes and byte-addressed machines
https://www.garlic.com/~lynn/2010d.html#81 LPARs: More or Less?
https://www.garlic.com/~lynn/2010c.html#41 Happy DEC-10 Day
https://www.garlic.com/~lynn/2008e.html#14 Kernels
https://www.garlic.com/~lynn/2008d.html#69 Regarding the virtual machines
https://www.garlic.com/~lynn/2006y.html#16 "The Elements of Programming Style"
https://www.garlic.com/~lynn/2005p.html#18 address space
https://www.garlic.com/~lynn/2004o.html#57 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004n.html#54 CKD Disks?
https://www.garlic.com/~lynn/2004n.html#26 PCIe as a chip-to-chip interconnect
https://www.garlic.com/~lynn/2004f.html#53 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004e.html#41 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2003g.html#13 Page Table - per OS/Process
https://www.garlic.com/~lynn/2003d.html#53 Reviving Multics
https://www.garlic.com/~lynn/2002n.html#74 Everything you wanted to know about z900 from IBM
https://www.garlic.com/~lynn/2002d.html#51 Hardest Mistake in Comp Arch to Fix
https://www.garlic.com/~lynn/2001k.html#16 Minimalist design (was Re: Parity - why even or odd)
https://www.garlic.com/~lynn/2000c.html#84 Is a VAX a mainframe?
https://www.garlic.com/~lynn/98.html#36 What is MVS/ESA?

other recent posts mentioning common segment/system area
https://www.garlic.com/~lynn/2024d.html#24 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024c.html#91 Gordon Bell
https://www.garlic.com/~lynn/2024c.html#67 IBM Mainframe Addressing
https://www.garlic.com/~lynn/2024c.html#55 backward architecture, The Design of Design
https://www.garlic.com/~lynn/2024b.html#58 Vintage MVS
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2024b.html#27 HA/CMP
https://www.garlic.com/~lynn/2024b.html#12 3033
https://www.garlic.com/~lynn/2024.html#50 Slow MVS/TSO
https://www.garlic.com/~lynn/2023g.html#77 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#48 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#29 Another IBM Downturn
https://www.garlic.com/~lynn/2023g.html#2 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#26 Ferranti Atlas
https://www.garlic.com/~lynn/2023d.html#36 "The Big One" (IBM 3033)
https://www.garlic.com/~lynn/2023d.html#22 IBM 360/195
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#0 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2022h.html#27 370 virtual memory
https://www.garlic.com/~lynn/2022f.html#122 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022d.html#93 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#55 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022c.html#49 IBM 3033 Personal Computing
https://www.garlic.com/~lynn/2022b.html#19 Channel I/O
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021k.html#113 IBM Future System
https://www.garlic.com/~lynn/2021h.html#70 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2021b.html#63 Early Computer Use
https://www.garlic.com/~lynn/2020.html#36 IBM S/360 - 370

--
virtualization experience starting Jan1968, online at home since Mar1970

ATT/SUN and Open System Foundation

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: ATT/SUN and Open System Foundation
Date: 16 Jul, 2024
Blog: Facebook
ATT/SUN and Open System Foundation
https://en.wikipedia.org/wiki/Open_Software_Foundation
It was intended as an organization for joint development, mostly in response to a perceived threat of "merged UNIX system" efforts by AT&T Corporation and Sun Microsystems.

... snip ...

Early 80s, IBM had 801/RISC ROMP chip originally targeted for Displaywriter followon
https://en.wikipedia.org/wiki/IBM_Displaywriter_System
when that got killed, they decided to retarget to the Unix workstation market and hired the company that had done AT&T port for IBM/PC PC/IX, to do part for the "PC/RT" which becomes AIX.
https://en.wikipedia.org/wiki/IBM_RT_PC

The IBM Palo Alto group was in the process of doing BSD port for IBM 370 ... and was then told to retarget for the PC/RT, which ships as "AOS".

Then the follow-on 801/RISC chipset, RIOS was for the RS/6000 and lots of BSDism was incorporated into AIX
https://en.wikipedia.org/wiki/IBM_RS/6000

We had started RS/6000 HA/6000 in 1989, originally for the NYTimes to port their newspaper system (ATEX) from VAXCluster to RS/6000. I then rename/rebrand it as HA/CMP when I start doing numeric/scientific cluster scale-up with the national labs and commercial cluster scale-up with RDBMS vendors (that had VAXCluster support in the same source base with UNIX; I even do a distributed lock manager supporting VAXcluster API semantics to simplify the port).

RS/6000 AIX ran BSD RENO/TAHOE TCP/IP and I wanted to do IP-address "take-over" as part of HA/CMP fault recovery. However, RENO/TAHOE turned out to be the major TCP/IP support on lots of different unix vendor clients, and the problem found was that while the DNS cache and ARP cache had time-outs, there was a "fastpath" where the immediately previous IP->MAC mapping was saved, and if the next packet was for the same IP address it would use the saved MAC (not checking the ARP cache) ... impacting scenarios where all of a client's traffic (for long periods) was to the (same) server. Had to come up with a server hack to keep track of client ip-addresses, and as part of take-over, send a "ping" with a different ip-address to each (saved) client, forcing them to recheck their ARP cache.
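
A rough sketch in Python of that take-over hack (illustrative only, not the HA/CMP code; assumes a Linux-style iputils ping where -I selects the source address, and the addresses and client list are made-up example values):

import subprocess

TAKEN_OVER_IP = "192.0.2.10"   # service address being taken over (example value)
SECONDARY_IP  = "192.0.2.11"   # a different address configured on this server (example value)

def nudge_clients(client_ips):
    # ping each tracked client from the secondary address; one packet is enough,
    # the point is just to force the client through a fresh ARP exchange so it
    # stops using the saved "previous IP -> MAC" fastpath entry
    for client in client_ips:
        subprocess.run(["ping", "-c", "1", "-I", SECONDARY_IP, client],
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)

# the client list would come from the server's own tracking of recent traffic
nudge_clients(["192.0.2.20", "192.0.2.21"])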

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc, etc
https://www.garlic.com/~lynn/subtopic.html#801
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

some posts mentioning HA/CMP distributed lockmanager
https://www.garlic.com/~lynn/2024d.html#52 Cray
https://www.garlic.com/~lynn/2024c.html#105 Financial/ATM Processing
https://www.garlic.com/~lynn/2024c.html#18 CP40/CMS
https://www.garlic.com/~lynn/2024b.html#80 IBM DBMS/RDBMS
https://www.garlic.com/~lynn/2024b.html#70 HSDT, HA/CMP, NSFNET, Internet
https://www.garlic.com/~lynn/2024b.html#55 IBM Token-Ring
https://www.garlic.com/~lynn/2024b.html#29 DB2
https://www.garlic.com/~lynn/2024b.html#22 HA/CMP
https://www.garlic.com/~lynn/2024.html#93 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#82 Benchmarks
https://www.garlic.com/~lynn/2024.html#71 IBM AIX
https://www.garlic.com/~lynn/2023e.html#86 Relational RDBMS
https://www.garlic.com/~lynn/2022e.html#103 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022c.html#73 lock me up, was IBM Mainframe market
https://www.garlic.com/~lynn/2022c.html#63 What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022b.html#68 ARPANET pioneer Jack Haverty says the internet was never finished
https://www.garlic.com/~lynn/2022b.html#62 IBM DB2
https://www.garlic.com/~lynn/2022.html#112 GM C4 and IBM HA/CMP
https://www.garlic.com/~lynn/2014k.html#40 How Larry Ellison Became The Fifth Richest Man In The World By Using IBM's Idea
https://www.garlic.com/~lynn/2009h.html#26 Natural keys vs Aritficial Keys
https://www.garlic.com/~lynn/2009b.html#43 "Larrabee" GPU design question
https://www.garlic.com/~lynn/2002f.html#1 Blade architectures

--
virtualization experience starting Jan1968, online at home since Mar1970

ATT/SUN and Open System Foundation

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: ATT/SUN and Open System Foundation
Date: 16 Jul, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024d.html#84 ATT/SUN and Open System Foundation

trivia: IBM Palo Alto was also working with the UCLA Locus people and, while the 370 BSD work got redirected to the PC/RT, they did a port of Locus to 370 ... which becomes AIX/370 (and AIX/386).

... lots of topic drift that includes some NSFNET. When I 1st joined IBM it was at the cambridge science center (some of the MIT CTSS/7094 people had gone to do MULTICS on the 5th flr and others went to the science center on the 4th flr) in 545 tech sq (bldgs since remodeled and renumbered). Late 70s some number of us transferred to San Jose ... I was in San Jose Research. I got blamed for online computer conferencing in the late 70s and early 80s; it really took off spring1981, when I distributed a trip report of a visit to Jim Gray at Tandem (folklore is that when the corporate executive committee was told, 5of6 wanted to fire me). One of the outcomes was I was transferred to YKT (aka Watson), but left to live in San Jose with an office in Research and part of a wing in the Los Gatos lab, having to commute to YKT a couple times a month. I was also told that with most of the corporate executive committee wanting to fire me, they would never approve me for a fellow, but if I kept my head down, money could be steered my way and I could have projects almost as if I was a fellow.

One was HSDT, T1 and faster computer links (both terrestrial and satellite) ... one of the first was T1 satellite between Los Gatos lab and Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in Kingston (had a bunch of FPS boxes, some with 40mbyte/sec disk arrays)
https://en.wikipedia.org/wiki/Floating_Point_Systems
Cornell University, led by physicist Kenneth G. Wilson, made a supercomputer proposal to NSF with IBM to produce a processor array of FPS boxes attached to an IBM mainframe with the name lCAP.

... snip ...

Then had a custom built TDMA satellite system with a transponder on SBS4 and three satellite dishes, 4.5M dishes in Los Gatos and YKT and a 7M dish in Austin. There was an EVE hardware logic simulator in SJ, and the claim was that the Austin RIOS chip designers being able to use the EVE simulator via the satellite link helped bring the RIOS chips in a year early.

Was also working with the NSF director and was supposed to get $20M to interconnect the NSF supercomputer centers; then congress cut the budget, some other things happened, and finally an RFP was released (in part based on what we already had running). Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

... IBM internal politics was not allowing us to bid (possibly contributing was being blamed for online computer conferencing). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid, awarded 24Nov87)

1989 HA/6000 was pitched to Nick Donofrio (see above) and he approved it. Early Jan1992, had a meeting with Oracle CEO and staff where IBM AWD/Hester said HA/CMP would have 16processor clusters by mid92 and 128processor clusters by ye92 ... also updated IBM FSD about the cluster scale-up work with national labs. Somebody must have told the IBM Kingston Supercomputer Group, because by the end of Jan92, cluster scale-up was transferred for announce as IBM Supercomputer (technical/scientific *ONLY*) and we were told we couldn't work on anything with more than four processors. We leave IBM a few months later. Some IBM press:
https://archive.org/details/sim_computerworld_1992-02-17_26_7
also 17feb92
https://www.garlic.com/~lynn/2001n.html#6000clusters1
11May92 press
https://www.garlic.com/~lynn/2001n.html#6000clusters2

trivia: with regard to "surprise comment", a decade earlier, branch office cons me into doing benchmark on engineer IBM 4341 I had access to, for a national lab that was looking at getting 70 for a compute farm (sort of leading edge of the coming cluster supercomputing tsunami).

Besides technical/scientific cluster scaleup with national labs ... also had been working with LLNL having their LINCS/Unitree supercomputer filesystem ported to HA/CMP and with NCAR having their Mesa Archival supercomputer filesystem ported to HA/CMP.

15Jun92 press
https://www.garlic.com/~lynn/2001n.html#6000clusters3

trivia: with respect to the comment about an IBM 32-microprocessor 370 (end of Jul92, my last day at IBM): in the first half of the 70s, IBM had the Future System project to completely replace 370 (during which internal politics was killing off 370 efforts; the claim is that the lack of new 370s during the period was responsible for the clone 370 makers getting their market foothold). When that implodes there was a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick and dirty 3033&3081 efforts in parallel.

I also get roped into helping with a 16-processor SMP 370 that everybody thought was great until somebody tells the head of POK that it could be decades before POK's favorite son operating system (aka MVS) had (effective) 16-processor support (MVS documentation at the time was that a 2processor SMP got only 1.2-1.5 times the throughput of a single processor ... and its SMP software overhead increased as the number of processors increased). POK doesn't ship a 16processor SMP until nearly 25yrs later, after the turn of the century. As an aside, old 15Mar1985 email about being scheduled to present to the NSF director; YKT wants me to spend the week there, in a meeting discussing how many processor chips could be crammed into a rack (and how many racks can be tied together).
https://www.garlic.com/~lynn/2007d.html#email850315
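
A small illustrative calculation in Python of why those two-processor numbers made a 16-way look hopeless (a crude extrapolation, not IBM measurements; the 30% per-added-processor overhead is just tuned so a 2-way comes out about 1.4):

def smp_throughput(n, overhead=0.30):
    # crude model: each added processor costs every processor a slice of its capacity
    per_cpu = (1.0 - overhead) ** (n - 1)
    return n * per_cpu

for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} processors -> {smp_throughput(n):.2f} times one processor")
# under this toy model throughput peaks around 3-4 processors and collapses long before 16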

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

some recent posts mentioning Palo Alto, BSD, UCLA Locus, aix/370
https://www.garlic.com/~lynn/2024.html#95 IBM AIX
https://www.garlic.com/~lynn/2024.html#70 IBM AIX
https://www.garlic.com/~lynn/2024.html#13 THE RISE OF UNIX. THE SEEDS OF ITS FALL
https://www.garlic.com/~lynn/2023f.html#43 IBM Vintage Series/1
https://www.garlic.com/~lynn/2023.html#39 IBM AIX
https://www.garlic.com/~lynn/2022d.html#79 ROMP
https://www.garlic.com/~lynn/2022c.html#30 Unix work-alike
https://www.garlic.com/~lynn/2022.html#8 DEC VAX, VAX/Cluster and HA/CMP
https://www.garlic.com/~lynn/2021k.html#64 1973 Holmdel IBM 370's
https://www.garlic.com/~lynn/2021k.html#63 1973 Holmdel IBM 370's
https://www.garlic.com/~lynn/2021k.html#27 MS/DOS for IBM/PC
https://www.garlic.com/~lynn/2021j.html#29 IBM AIX
https://www.garlic.com/~lynn/2021i.html#45 not a 360 either, was Design a better 16 or 32 bit processor
https://www.garlic.com/~lynn/2021e.html#83 Amdahl
https://www.garlic.com/~lynn/2021d.html#83 IBM AIX
https://www.garlic.com/~lynn/2021b.html#51 CISC to FS to RISC, Holy wars of the past - how did they turn out?
https://www.garlic.com/~lynn/2019e.html#109 ROMP & Displaywriter

--
virtualization experience starting Jan1968, online at home since Mar1970

ATT/SUN and Open System Foundation

From: Lynn Wheeler <lynn@garlic.com>
Subject: ATT/SUN and Open System Foundation
Date: 17 Jul, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024d.html#84 ATT/SUN and Open System Foundation
https://www.garlic.com/~lynn/2024d.html#85 ATT/SUN and Open System Foundation

ROMP chip was a joint research/office products 801/risc for the displaywriter followon ... the 801/risc CP.r operating system, programmed in PL.8, didn't need traditional hardware supervisor/problem states ... the claim was PL.8 would only generate correct programs and CP.r would only load/execute correct programs ... so "privileged" code could be invoked directly. Retargeting to unix required a more traditional system.

Also Austin needed something to do with their 200 CP.r PL.8 programmers. They come up with the VRM, an abstract virtual machine, and tell everybody that doing the VRM plus an AIX-to-VRM interface is less resources than just doing AIX directly to the PC/RT bare hardware. Note: when Palo Alto does BSD to the PC/RT bare machine (for "AOS"), it is with much fewer people and much less elapsed time than either VRM or AIX.

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

posts mentioning 801/risc, VRM, AIX, and BSD/AOS
https://www.garlic.com/~lynn/2024.html#72 IBM AIX
https://www.garlic.com/~lynn/2022d.html#79 ROMP
https://www.garlic.com/~lynn/2021i.html#45 not a 360 either, was Design a better 16 or 32 bit processor
https://www.garlic.com/~lynn/2021d.html#83 IBM AIX
https://www.garlic.com/~lynn/2011c.html#38 IBM "Watson" computer and Jeopardy
https://www.garlic.com/~lynn/2008e.html#10 Kernels
https://www.garlic.com/~lynn/2008d.html#83 Migration from Mainframe to othre platforms - the othe bell?
https://www.garlic.com/~lynn/2005u.html#61 DMV systems?
https://www.garlic.com/~lynn/2004n.html#30 First single chip 32-bit microprocessor
https://www.garlic.com/~lynn/2003d.html#54 Filesystems
https://www.garlic.com/~lynn/2002i.html#81 McKinley Cometh

--
virtualization experience starting Jan1968, online at home since Mar1970

Benchmarking and Testing

From: Lynn Wheeler <lynn@garlic.com>
Subject: Benchmarking and Testing
Date: 17 Jul, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024d.html#9 Benchmarking and Testing

more tank trivia: Doing some work with a former Marine that was pushing open source for battle commanders. One point he made was the Marines were forced to accept the M1 Abrams at 65-70tons (in order to achieve a quantity discount from the manufacturer for the Army) even though 95% of Marine mission profiles involved places that had 35ton load limits. Claims were that the Abrams was designed for a tank slugfest in Europe and the US underwrote the cost of bridge, road, and infrastructure upgrades in Germany (to handle the Abrams' weight).

some past posts mentioning Abrams weight
https://www.garlic.com/~lynn/2018b.html#81 What the Gulf War Teaches About the Future of War
https://www.garlic.com/~lynn/2017h.html#31 Disregard post (another screwup; absolutely nothing to do with computers whatsoever!)
https://www.garlic.com/~lynn/2012n.html#38 Jedi Knights
https://www.garlic.com/~lynn/2012h.html#9 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2011n.html#21 Goodbye, OODA-Loop
https://www.garlic.com/~lynn/2011m.html#58 computer bootlaces

--
virtualization experience starting Jan1968, online at home since Mar1970

Computer Virtual Memory

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Computer Virtual Memory
Date: 17 Jul, 2024
Blog: Facebook
I had started in the 70s claiming that the original 360 had made design trade-offs with abundant I/O resources and scarce real memory and CPU resources, but by the mid-70s that had started to invert, and by the early 80s I wrote a tome that disk relative system throughput had declined by an order of magnitude (disks got 3-5 times faster while systems got 40-50 times faster). A disk division executive took exception and assigned the division performance group to refute the claim. After a couple weeks they basically came back and said I had slightly understated the situation. They then respun the analysis for how to configure disks for system throughput (16Aug1984, SHARE 63, B874). As the mismatch between disk throughput and system throughput increased, systems needed increasingly larger numbers of concurrently executing programs.
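
The arithmetic behind the order-of-magnitude claim, using mid-range values from the sentence above (a trivial sketch, illustrative only):

disk_speedup   = 4      # disks got 3-5 times faster
system_speedup = 45     # systems got 40-50 times faster
print(f"relative disk throughput: {disk_speedup / system_speedup:.2f} of what it had been")
# roughly 0.09, i.e. about a tenfold decline, hence needing many more concurrent programs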

from long ago and far away (about gov. use of CP67 in the 60s&70s, separating different users)
https://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml
melinda's VM history
https://www.leeandmelindavarian.com/Melinda#VMHist
other trivia: TYMSHARE started offering its VM370/CMS-based online computer conferencing to the (user group) SHARE as VMSHARE in Aug1976, archives here
http://vm.marist.edu/~vmshare

A little over a decade ago, I was asked if I could track down the decision to add virtual memory to all 370s and found a former staff member to the executive making the decision. Basically OS/360 MVT storage management was so bad that region sizes had to be specified four times larger than used ... as a result only four regions could be running concurrently on a typical 1mbyte 370/165 system, insufficient to keep the system busy and justified.

archived post with pieces of email exchange about decision to add virtual memory to all 370s
https://www.garlic.com/~lynn/2011d.html#73

Mapping MVT to a 16mbyte virtual address space (vs2/svs, similar to running MVT in a CP67 16mbyte virtual machine) allowed the number of concurrently running regions to be increased by a factor of four (limited to 15 with the 4bit storage protect keys keeping regions isolated). However, as systems got larger, even fifteen weren't sufficient and they went to providing a separate 16mbyte virtual address space for each region (VS2/MVS). However, the OS/360 heavy pointer-passing API resulted in mapping an 8mbyte image of the MVS kernel into every virtual address space (so kernel API calls could access API parameters) ... leaving 8mbytes for the application. Then, because subsystems were also placed in their own separate 16mbyte virtual address spaces, for subsystems to access callers' API parameters they mapped a 1mbyte "common segment area" (CSA) into every virtual address space (leaving 7mbytes for applications). However, CSA space requirements were proportional to concurrently executing applications and number of subsystems ... and CSA quickly became the multi-mbyte "common system area". By 3033 time-frame CSA was frequently 5-6mbytes (leaving 2-3mbytes) and threatening to become 8mbytes (leaving zero for applications).

This was part of the mad rush to 370/XA & MVS/XA with 31-bit addressing and "access registers" (a semi-privileged subsystem could access the caller's address space as a "secondary address space"). As a temporary stop-gap, a subset of "access registers" was retrofitted to the 3033 for MVS, as "dual-address space mode". However, kernel code was still required to swap the hardware address space pointers (subsystem call passing through the kernel, moving the caller's address space pointer to secondary and loading the subsystem's address space pointer as primary ... and then restoring on return). Then came hardware support with a "program call" table ... an entry for each subsystem, so a subsystem call could be handled entirely by hardware, including the address space pointer swapping (w/o the overhead of the kernel software).

1983 370/XA Principles of Operation SA22-7085
https://bitsavers.org/pdf/ibm/370/princOps/SA22-7085-0_370-XA_Principles_of_Operation_Mar83.pdf
pg3-5 Primary & Secondary Virtual Address
pg10-22 "Program Call"
pg10-28 "Program Transfer"

... snip ...

Principles of Operation SA22-7832-13 (May 2022)
https://publibfp.dhe.ibm.com/epubs/pdf/a227832d.pdf
pg3-23 "Address Spaces"
pg10-97 "Program Call"
pg10-110 "Program Return"
pg10-114 "Program Transfer"


... snip ...

Other trivia: In the late 70s I'm working with Jim Gray and Vera Watson on the original SQL/relational implementation (System/R) at San Jose Research, and in fall of 1980 Jim Gray leaves IBM for TANDEM and palms off some stuff on me. A year later, at the Dec81 ACM SIGOPS meeting, Jim asked me to help a TANDEM co-worker get his Stanford PHD that heavily involved GLOBAL LRU (the "local LRU" forces from 60s academic work were heavily lobbying Stanford to not award a PHD for anything involving GLOBAL LRU). Jim knew I had detailed stats on the CP67 Cambridge/Grenoble global/local LRU comparison (showing global significantly outperformed local). Early 70s, the IBM Grenoble Science Center had a 1mbyte 360/67 (155 4k pageable pages) running 35 CMS users and had modified "standard" CP67 with a working set dispatcher and local LRU page replacement ... corresponding to 60s academic papers. I was then at Cambridge, which had a 768kbyte 360/67 (104 4k pageable pages, only 2/3rds the number of Grenoble) and was running 80 CMS users, similar kind of workloads, similar response, better throughput (with twice as many users), running my "standard" CP67 that I had originally done as an undergraduate in the 60s. I had loads of Cambridge benchmarking&performance data, in addition to the Grenoble APR73 CACM article and lots of other detailed performance data from Grenoble.
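
A minimal simulation sketch in Python (a toy with synthetic references, not the CP67 algorithms or the Cambridge/Grenoble data) of why, for the same total number of page frames, one global LRU pool shared by all users takes fewer page faults than fixed per-user partitions: frames that idle users aren't touching get lent to whoever is busy.

from collections import OrderedDict
import random

def lru_faults(refs, frames):
    # count page faults for one LRU-managed pool of 'frames' frames
    pool, faults = OrderedDict(), 0
    for ref in refs:
        if ref in pool:
            pool.move_to_end(ref)            # recently used
        else:
            faults += 1
            if len(pool) >= frames:
                pool.popitem(last=False)     # evict least recently used
            pool[ref] = True
    return faults

random.seed(1)
users, frames_total, hot_pages = 4, 64, 48
refs = []                                    # synthetic reference string of (user, page)
for phase in range(40):                      # users take turns being the busy one
    active = phase % users
    for _ in range(400):
        u = active if random.random() < 0.9 else random.randrange(users)
        refs.append((u, random.randrange(hot_pages)))

global_faults = lru_faults(refs, frames_total)
local_faults  = sum(lru_faults([r for r in refs if r[0] == u], frames_total // users)
                    for u in range(users))
print("global LRU faults:", global_faults, " fixed per-user partitions:", local_faults)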

I went to send a reply with the detailed data, but company executives blocked sending it for nearly a year (I hoped they viewed it as punishment for being blamed for online computer conferencing on the internal network ... and not that they were meddling in an academic dispute).

More trivia: long ago and far away, the OS2 group were being told that VM370 did a much better job than OS2. They sent an email to Endicott asking for information, Endicott sends it to IBM Kingston, and Kingston sends it to me ... some old email
https://www.garlic.com/~lynn/2007i.html#email871204
https://www.garlic.com/~lynn/2007i.html#email871204b

posts mentioning virtual memory and page replacement algorithms
https://www.garlic.com/~lynn/subtopic.html#clock
posts mentioning dynamic adaptive resource management and dispatching/scheduling
https://www.garlic.com/~lynn/subtopic.html#fairshare
cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm posts
https://www.garlic.com/~lynn/submisc.html#cscvm
benchmarking posts
https://www.garlic.com/~lynn/submain.html#benchmark

recent posts mentioning observing relative system disk throughput had declined by order of magnitude between 60s & 80s
https://www.garlic.com/~lynn/2024d.html#32 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#24 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024c.html#109 Old adage "Nobody ever got fired for buying IBM"
https://www.garlic.com/~lynn/2024c.html#55 backward architecture, The Design of Design
https://www.garlic.com/~lynn/2023g.html#32 Storage Management
https://www.garlic.com/~lynn/2023e.html#92 IBM DASD 3380
https://www.garlic.com/~lynn/2023e.html#7 HASP, JES, MVT, 370 Virtual Memory, VS2
https://www.garlic.com/~lynn/2023b.html#26 DISK Performance and Reliability
https://www.garlic.com/~lynn/2023b.html#16 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023.html#33 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#6 Mainrame Channel Redrive
https://www.garlic.com/~lynn/2022h.html#36 360/85
https://www.garlic.com/~lynn/2022g.html#88 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#87 CICS (and other history)
https://www.garlic.com/~lynn/2022g.html#84 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022f.html#0 Mainframe Channel I/O
https://www.garlic.com/~lynn/2022e.html#49 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022d.html#48 360&370 I/O Channels
https://www.garlic.com/~lynn/2022d.html#22 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022b.html#77 Channel I/O
https://www.garlic.com/~lynn/2022.html#92 Processor, DASD, VTAM & TCP/IP performance
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021k.html#131 Multitrack Search Performance
https://www.garlic.com/~lynn/2021k.html#108 IBM Disks
https://www.garlic.com/~lynn/2021j.html#105 IBM CKD DASD and multi-track search
https://www.garlic.com/~lynn/2021j.html#78 IBM 370 and Future System
https://www.garlic.com/~lynn/2021i.html#23 fast sort/merge, OoO S/360 descendants
https://www.garlic.com/~lynn/2021g.html#44 iBM System/3 FORTRAN for engineering/science work?
https://www.garlic.com/~lynn/2021f.html#53 3380 disk capacity
https://www.garlic.com/~lynn/2021e.html#33 Univac 90/30 DIAG instruction
https://www.garlic.com/~lynn/2021.html#79 IBM Disk Division
https://www.garlic.com/~lynn/2021.html#59 San Jose bldg 50 and 3380 manufacturing
https://www.garlic.com/~lynn/2021.html#17 Performance History, 5-10Oct1986, SEAS

posts mentioning decision to add virtual memory to all 370s:
https://www.garlic.com/~lynn/2024d.html#83 Continuations
https://www.garlic.com/~lynn/2024d.html#24 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024c.html#91 Gordon Bell
https://www.garlic.com/~lynn/2024c.html#55 backward architecture, The Design of Design
https://www.garlic.com/~lynn/2024b.html#108 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#107 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#65 MVT/SVS/MVS/MVS.XA
https://www.garlic.com/~lynn/2024b.html#58 Vintage MVS
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2024b.html#12 3033
https://www.garlic.com/~lynn/2024.html#50 Slow MVS/TSO
https://www.garlic.com/~lynn/2023g.html#77 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023g.html#6 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#26 Ferranti Atlas
https://www.garlic.com/~lynn/2023d.html#9 IBM MVS RAS
https://www.garlic.com/~lynn/2023d.html#0 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2022f.html#122 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022d.html#93 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#55 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021k.html#113 IBM Future System
https://www.garlic.com/~lynn/2021h.html#70 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2021b.html#63 Early Computer Use
https://www.garlic.com/~lynn/2019b.html#94 MVS Boney Fingers
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2019.html#18 IBM assembler
https://www.garlic.com/~lynn/2018c.html#23 VS History
https://www.garlic.com/~lynn/2018.html#92 S/360 addressing, not Honeywell 200
https://www.garlic.com/~lynn/2017b.html#8 BSAM vs QSAM
https://www.garlic.com/~lynn/2016.html#78 Mainframe Virtual Memory
https://www.garlic.com/~lynn/2015g.html#90 IBM Embraces Virtual Memory -- Finally

--
virtualization experience starting Jan1968, online at home since Mar1970

John Boyd and IBM Wild Ducks

From: Lynn Wheeler <lynn@garlic.com>
Subject: John Boyd and IBM Wild Ducks
Date: 18 Jul, 2024
Blog: Facebook
John Boyd and IBM Wild Ducks
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

I had been introduced to John Boyd in the early 80s and used to sponsor his briefings at IBM. The Commandant of the Marine Corps (recently passed this spring) had leveraged Boyd for a Corps "make-over" (about the same time that IBM was desperately in need of a make-over). There was then a parody about the reformers versus the attritionists that appeared in the Marine Corps Gazette. I was once sitting next to the anonymous author at an MCU meeting; the archive seems to have gone 404, but lives on at the wayback machine
https://web.archive.org/web/20110817133447/http://www.mca-marines.org/gazette/attritionist-letters-archives
We have had a group of Marines, who I have allowed to remain anonymous, compile epistolary articles they have titled " The Attritionist Letters." They write provocatively about what they see as the ongoing clash between maneuver warfare advocates and attritionists. It is our hope that they will engender a spirited debate over the next several months as we publish their letters. I do not agree with every thing that they assert, but they also make points that are valid and well worth considering. One of the most important points I discovered soon after becoming the editor of the Gazette was that you will have the opportunity to publish points that you may or may not agree with and hope that the readers will take up the debate.

... snip ...

also archived here
https://fabiusmaximus.com/2011/05/11/27461

Boyd posts and web URLs
https://www.garlic.com/~lynn/subboyd.html
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Computer Virtual Memory

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Computer Virtual Memory
Date: 18 Jul, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024d.html#88 Computer Virtual Memory

I did a talk at OCT86 (user group) SEAS (EU SHARE) ... and "recently" repeated it at Mar2011 (DC user group) HILLGANG
https://www.garlic.com/~lynn/hill0316g.pdf

from recent Ferranti Atlas posts
https://www.garlic.com/~lynn/2024b.html#39 Tonight's tradeoff
https://www.garlic.com/~lynn/2024b.html#95 Ferranti Atlas and Virtual Memory
https://www.garlic.com/~lynn/2024b.html#96 Ferranti Atlas and Virtual Memory

Melinda Varian's history
https://www.leeandmelindavarian.com/Melinda#VMHist
https://www.leeandmelindavarian.com/Melinda/neuvm.pdf
from above, Les Comeau has written (about TSS/360)
Since the early time-sharing experiments used base and limit registers for relocation, they had to roll in and roll out entire programs when switching users....Virtual memory, with its paging technique, was expected to reduce significantly the time spent waiting for an exchange of user programs.

What was most significant was that the commitment to virtual memory was backed with no successful experience. A system of that period that had implemented virtual memory was the Ferranti Atlas computer, and that was known not to be working well. What was frightening is that nobody who was setting this virtual memory direction at IBM knew why Atlas didn't work.35


... snip ...

Atlas reference (gone 403?, but lives free at wayback):
https://web.archive.org/web/20121118232455/http://www.ics.uci.edu/~bic/courses/JaverOS/ch8.pdf
Paging can be credited to the designers of the ATLAS computer, who employed an associative memory for the address mapping [Kilburn, et al., 1962]. For the ATLAS computer, |w| = 9 (resulting in 512 words per page), |p| = 11 (resulting in 2048 pages), and f = 5 (resulting in 32 page frames). Thus a 2^20-word virtual memory was provided for a 2^14-word machine. But the original ATLAS operating system employed paging solely as a means of implementing a large virtual memory; multiprogramming of user processes was not attempted initially, and thus no process id's had to be recorded in the associative memory. The search for a match was performed only on the page number p.

... snip ...

... referencing ATLAS used paging for large virtual memory ... but not multiprogramming (multiple concurrent address spaces). Cambridge had modified 360/40 with virtual memory and associative lookup that included both process-id and page number.
https://www.leeandmelindavarian.com/Melinda/JimMarch/CP40_The_Origin_of_VM370.pdf

CP40 morphs into CP67 when the 360/67 becomes available, standard with virtual memory. As an undergraduate in the 60s, I had been hired fulltime responsible for OS/360 running on the 360/67 (as a 360/65; it originally was supposed to be for TSS/360). The univ shutdown the datacenter on weekends and I would have it dedicated (although 48hrs w/o sleep made Monday classes difficult). CSC then came out to install CP/67 (3rd installation after CSC itself and MIT Lincoln Labs) and I mostly played with it during my dedicated time ... spent the 1st six months or so redoing pathlengths for running OS/360 in a virtual machine. OS/360 benchmark was 322secs on the bare machine, initially 856secs in a virtual machine (CP67 CPU 534secs); got CP67 CPU down to 113secs (from 534secs).

I redid the scheduling&paging algorithms, added ordered seek for disk i/o, and chained page requests to maximize transfers/revolution (2301 fixed-head drum from peak 70/sec to peak 270/sec), and changed CP67 page replacement to global LRU (at a time when the academic literature was all about "local LRU"), which I also deployed at Cambridge after graduating and joining IBM. IBM Grenoble Scientific Center modified CP67 to implement a "local" LRU algorithm for their 1mbyte 360/67 (155 page'able pages after fixed memory requirements). Grenoble had a very similar workload to Cambridge, but their throughput for 35 users (local LRU) was about the same as the Cambridge 768kbyte 360/67 (104 page'able pages) with 80 users (and global LRU) ... aka global LRU outperformed "local LRU" with more than twice the number of users and only 2/3rds the available memory.
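
A toy calculation in Python (assumed parameters, not 2301 specs, and not trying to reproduce the 70/sec and 270/sec figures) of why chaining page requests sorted by rotational position raises drum throughput: unchained, every I/O waits on average half a revolution for its slot to come around; chained, one pass around the drum services the whole queue.

rev_per_sec   = 60    # assumed rotation rate
slots_per_rev = 9     # assumed page slots per revolution
queue_depth   = 8     # page requests outstanding when the channel program is built

# unchained: one page per I/O, average rotational delay of half a revolution plus one slot transfer
t_unchained = 0.5 / rev_per_sec + 1.0 / (rev_per_sec * slots_per_rev)
print("unchained:", round(1.0 / t_unchained), "pages/sec")

# chained and sorted by rotational position: the whole queue is serviced in one revolution
print("chained:  ", round(queue_depth * rev_per_sec), "pages/sec")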

... clip ...

posts mentioning virtual memory and page replacement algorithms
https://www.garlic.com/~lynn/subtopic.html#clock

other posts mentioning Ferranti Atlas
https://www.garlic.com/~lynn/2023f.html#25 Ferranti Atlas
https://www.garlic.com/~lynn/2022h.html#44 360/85
https://www.garlic.com/~lynn/2022h.html#21 370 virtual memory
https://www.garlic.com/~lynn/2022b.html#54 IBM History
https://www.garlic.com/~lynn/2022b.html#20 CP-67
https://www.garlic.com/~lynn/2017j.html#71 A Computer That Never Was: the IBM 7095
https://www.garlic.com/~lynn/2015c.html#47 The Stack Depth
https://www.garlic.com/~lynn/2012l.html#37 S/360 architecture, was PDP-10 system calls
https://www.garlic.com/~lynn/2011d.html#81 Multiple Virtual Memory
https://www.garlic.com/~lynn/2007u.html#79 IBM Floating-point myths
https://www.garlic.com/~lynn/2007u.html#77 IBM Floating-point myths
https://www.garlic.com/~lynn/2007t.html#54 new 40+ yr old, disruptive technology
https://www.garlic.com/~lynn/2007r.html#64 CSA 'above the bar'
https://www.garlic.com/~lynn/2007r.html#51 Translation of IBM Basic Assembler to C?
https://www.garlic.com/~lynn/2007e.html#1 Designing database tables for performance?
https://www.garlic.com/~lynn/2006i.html#30 virtual memory
https://www.garlic.com/~lynn/2005o.html#4 Robert Creasy, RIP
https://www.garlic.com/~lynn/2003b.html#1 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003b.html#0 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003.html#72 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2002.html#42 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
https://www.garlic.com/~lynn/2001h.html#10 VM: checking some myths.
https://www.garlic.com/~lynn/2000f.html#78 TSS ancient history, was X86 ultimate CISC? designs)

--
virtualization experience starting Jan1968, online at home since Mar1970

Computer Virtual Memory

From: Lynn Wheeler <lynn@garlic.com>
Subject: Computer Virtual Memory
Date: 18 Jul, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024d.html#88 Computer Virtual Memory
https://www.garlic.com/~lynn/2024d.html#90 Computer Virtual Memory

in the late 70s at SJR, I implemented a super efficient record I/O trace ... which was used for monitoring and to feed configuration models, including cache modeling that compared file I/O caching for disk-level caches, controller-level caches, channel-level caches and system-level caches (aka processor memory for staging data, DBMS caches, etc). For a fixed amount of electronic store, a system-level cache always beat dividing it up and spreading it around at lower levels (effectively the same result I found in the 60s with global LRU beating "local LRU", i.e. partitioning the cache always required increasing the total amount of electronic store). Easy for CMS activity ... but also used for production MVS systems running in virtual machines.
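
A toy illustration of the partitioning effect (synthetic reference string, plain LRU; the workload and numbers below are made up, only the shape of the result matters): the same total number of cache slots gets a better hit count as one shared system-level cache than split into equal per-device caches when the traffic is skewed.

    # toy comparison: one shared LRU cache vs the same slots split per device
    from collections import OrderedDict
    import random

    def lru_hits(refs, size):
        cache, hits = OrderedDict(), 0
        for r in refs:
            if r in cache:
                hits += 1
                cache.move_to_end(r)
            else:
                cache[r] = True
                if len(cache) > size:
                    cache.popitem(last=False)   # evict least recently used
        return hits

    random.seed(1)
    DEVICES, TOTAL_SLOTS = 4, 200
    # skewed reference string: device 0 is much hotter than the others
    refs = [(random.choices(range(DEVICES), weights=[8, 2, 1, 1])[0],
             random.randrange(300)) for _ in range(20000)]

    shared = lru_hits(refs, TOTAL_SLOTS)
    split = sum(lru_hits([r for r in refs if r[0] == d], TOTAL_SLOTS // DEVICES)
                for d in range(DEVICES))
    print("shared cache hits:", shared, " partitioned hits:", split)

With the skewed traffic above the shared cache comes out well ahead; to match it, the partitioned configuration has to be given more total electronic store.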

A side effect of the record level trace analysis ... realized that there were also file collections that had "bursty" activity ... i.e. weekly/monthly reports, etc. ... many of the files weren't otherwise needed except for the collective periodic use; found useful later in things like ADSM.

however, one of the early record I/O monitor/trace results got me into conflicts with Tucson over the 3880 controller caches, aka Ironwood & Sheriff for 3380 disks, i.e. the 3880-11 8mbyte 4kbyte/page record cache and the 3880-13 8mbyte full-track cache.

There was also an issue with the 3880 controller compared to the 3830: while the 3880 had a hardware path for 3mbyte/sec data transfer, the rest was a (slow) vertical-microcode microprocessor (compared to the 3830's superfast horizontal microcode), so except for 3380 data transfer ... all other 3880 operations had much higher channel busy (than the 3830). The 3090 group had sized the number of channels assuming the 3880 was the same as the 3830 (but with 3mbyte/sec transfer). When they found out how bad the 3880 was, they realized that they needed to greatly increase the number of channels (to achieve the system target throughput), offsetting the increased channel busy. The extra channels required an extra TCM and the 3090 group semi-facetiously joked that they would bill the 3880 group for the increased 3090 manufacturing cost. Eventually marketing respun the big increase in channels as making the 3090 a wonderful I/O machine (when in fact it was required to offset the slow 3880 controller's channel busy increase).
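
Hypothetical capacity-planning arithmetic (the real 3830/3880 per-operation busy times aren't given here, so the numbers below are made up): for a given target I/O rate, the number of channels needed scales directly with per-operation channel-busy time, which is why a controller with more protocol overhead forces more channels for the same throughput target.

    # made-up numbers; only the scaling matters
    import math

    def channels_needed(io_per_sec, busy_ms_per_io, max_utilization=0.30):
        iops_per_channel = max_utilization / (busy_ms_per_io / 1000.0)   # IOPS one channel sustains
        return math.ceil(io_per_sec / iops_per_channel)

    TARGET_IOPS = 3000   # assumed system-wide target
    print("fast controller:", channels_needed(TARGET_IOPS, busy_ms_per_io=2.0), "channels")   # 20
    print("slow controller:", channels_needed(TARGET_IOPS, busy_ms_per_io=4.0), "channels")   # 40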

Then from long ago and far away (POK wanted to make some of the trace/monitoring and reports available to the field for customers)
https://www.garlic.com/~lynn/2007.html#email800807
https://www.garlic.com/~lynn/2007.html#email800807b

In 1980, STL (since renamed SVL) is bursting at the seams and 300 people (w/3270s) from the IMS group are being transferred to an offsite bldg (about half-way between STL & SJR) with dataprocessing back to the STL datacenter. They had tried "remote 3270" but found the human factors unacceptable. I get con'ed into doing channel-extender support so they can install channel-attached 3270 controllers in the offsite bldg and there is no perceptible human factors difference between offsite and in STL. A side-effect is that the STL 168s for the offsite group have 10-15% improvement in system throughput. The channel-attached 3270 controllers had previously been spread across all the channels shared with disk controllers and the (relatively) slower 3270 controller channel busy was interfering with disk I/O. The channel-extender boxes were faster than even the disk controllers, drastically reducing the channel busy for 3270 traffic (improving disk and system throughput). There was even talk about placing all STL 3270 controllers behind channel-extender boxes just to improve all systems' throughput. The hardware vendor then tries to get IBM to release my support, but there is a group in POK playing with some serial stuff and they get that vetoed. In 1988, the IBM branch office asks if I could help LLNL standardize some stuff they are playing with, which quickly becomes the fibre channel standard (including some stuff I had done in 1980), "FCS" initially 1gbit full-duplex, aggregate 200mbytes/sec. Then POK gets their stuff released with ES/9000 as ESCON (when it is already obsolete). Then some POK engineers get involved with FCS and define a heavy-weight protocol that radically reduces native throughput, which eventually ships as FICON. The latest public benchmark I can find is the z196 "Peak I/O" that gets 2M IOPS with 104 FICON. About the same time an FCS is announced for E5-2600 server blades claiming over a million IOPS (two such FCS have higher throughput than 104 FICON). Note IBM pubs recommend SAPs (system assist processors that do the actual I/O) be held to 70% CPU ... which would be about 1.5M IOPS.
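
The per-link arithmetic implied by the figures above:

    ficon_total_iops, ficon_links = 2_000_000, 104   # z196 "Peak I/O" benchmark
    fcs_per_link_iops = 1_000_000                     # E5-2600 era FCS claim ("over a million")

    print("IOPS per FICON link:", ficon_total_iops // ficon_links)                 # ~19,230
    print("FCS links to match 104 FICON:", ficon_total_iops / fcs_per_link_iops)   # 2.0
    print("SAPs capped at 70% CPU:", int(ficon_total_iops * 0.70), "IOPS")         # ~1.4M, the "about 1.5M" above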

SMP, tightly-coupled, multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#SMP
CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
playing disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FICON and FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon

posts mentioning DMKCOL
https://www.garlic.com/~lynn/2022b.html#83 IBM 3380 disks
https://www.garlic.com/~lynn/2022.html#83 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2013d.html#11 relative mainframe speeds, was What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012c.html#47 nested LRU schemes
https://www.garlic.com/~lynn/2011.html#71 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2011.html#70 Speed of Old Hard Disks
https://www.garlic.com/~lynn/2010i.html#18 How to analyze a volume's access by dataset
https://www.garlic.com/~lynn/2007.html#3 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2006y.html#35 The Future of CPUs: What's After Multi-Core?

--
virtualization experience starting Jan1968, online at home since Mar1970

Computer Virtual Memory

From: Lynn Wheeler <lynn@garlic.com>
Subject: Computer Virtual Memory
Date: 18 Jul, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024d.html#88 Computer Virtual Memory
https://www.garlic.com/~lynn/2024d.html#90 Computer Virtual Memory
https://www.garlic.com/~lynn/2024d.html#91 Computer Virtual Memory

trivia: about the same time as the I/O monitor/trace, I had also implemented CMSBACK (used in numerous internal datacenters, including the large single-system-image, loosely-coupled US online branch office sales&marketing support HONE complex up in Palo Alto). More than a decade later, PC and workstation clients were added and it was released as WDSF, which morphs into ADSM, then TSM. Current IBM history has the precursor to WDSF originating in 1988; however this ibm-main post references Melinda's history showing it was at least 1983, and I reference that it was already in ver 3 or 4 by 1983:
https://groups.google.com/g/bit.listserv.ibm-main/c/M94FF7teoE4/m/sZ4H74XQsqIJ
Melinda's history here (revision 08/17/92) pg.65
https://www.leeandmelindavarian.com/Melinda#VMHist
Late 70s, I had done the original version, then a co-worker helped with the next couple versions. He left IBM and did archive & backup implementations for a VM software company that was chosen by IBM Endicott to remarket ... while support was taken over in 1983 by the two people mentioned on pg65/pg66 of Melinda's history (and references to earlier work expunged, since then expunged again and the date reset to 1988). Some old CMSBACK related email
https://www.garlic.com/~lynn/lhwemail.html#cmsback

backup posts
https://www.garlic.com/~lynn/submain.html#backup

more trivia: Shortly after joining IBM, one of my hobbies was enhanced production operating systems for internal datacenters and HONE was a long time customer. In the morph of CP67->VM370 they dropped and/or simplified lots of features (including dropping tightly-coupled multiprocessor support). In 1974, I started migrating lots of CP67 features to VM370 (including the kernel reorg for multiprocessor, but not the actual support itself). About this time, the US HONE datacenters were all consolidated up in Palo Alto and enhanced with single-system-image, loosely-coupled, shared DASD with load balancing and fall-over across the complex. In 1975, I moved multiprocessor support to VM370, initially for HONE so they could add a second processor to every system (and with some sleight of hand getting twice the throughput from each system; this was in the period where MVS documentation was claiming two-processor systems only had 1.2-1.5 times the throughput of a single processor).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

even more trivia: when I transferred to SJR, I got to wander around datacenters in silicon valley, including disk engineering & product test (bldgs 14&15 across the street). They were doing 7x24, pre-scheduled, stand-alone test and said that they had recently tried MVS, but it had 15min mean-time-between-failure (in that environment). I offered to rewrite the I/O supervisor, making it bullet proof and never fail so they could do any amount of on-demand, concurrent testing. Downside, they got into the habit of blaming my software for any problem and I had to spend increasing amounts of time playing disk engineer diagnosing their hardware problems. I then wrote an internal research report on the I/O reliability changes and happened to mention the MVS MTBF, bringing the wrath of the MVS group on my head (no question about validity, I just wasn't allowed to expose MVS issues to upper IBM management). Just before the 3880 was about ready to ship to customers, FE had 57 simulated errors they expected to occur. In all 57 cases, MVS was failing (requiring manual re-ipl) and in 2/3rds of the cases there was no indication of what caused the failure (I didn't feel sorry).

getting to play disk engineer in bldgs14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk

--
virtualization experience starting Jan1968, online at home since Mar1970

Why Bush Invaded Iraq

From: Lynn Wheeler <lynn@garlic.com>
Subject: Why Bush Invaded Iraq
Date: 19 Jul, 2024
Blog: Facebook
Why Bush Invaded Iraq
https://lonecandle.medium.com/why-bush-invaded-iraq-c033e01eb18b
Primarily, Bush was motivated by security. Bush feared what Saddam would do with weapons of mass destruction (WMD). In the context of 9/11, an attack on the U.S. homeland that killed thousands of people, American leaders were on edge and didn't want to take any chance that there would be a second attack, especially a second attack with weapons of mass destruction.

... snip ...

... well, a cousin of White House Chief of Staff Card ... was dealing with Iraq in the UN and given proof that the WMDs (tracing back to the US in the Iran/Iraq war) had been decommissioned ... and provided the proof to (cousin) Card and others. Was then locked up in a military hospital, eventually let out, and in 2010 published a book on the (US) decommissioned WMDs.
https://www.amazon.com/EXTREME-PREJUDICE-Terrifying-Story-Patriot-ebook/dp/B004HYHBK2/

Four years later NYTimes had articles that the decommissioned WMDs had been found early in the invasion, but the information was classified for a decade.
http://www.nytimes.com/interactive/2014/10/14/world/middleeast/us-casualties-of-iraq-chemical-weapons.html

note the military-industrial complex had wanted a war so badly that corporate reps were telling former eastern bloc countries that if they voted for the IRAQ2 invasion in the UN, they would get membership in NATO and (directed appropriation) USAID (which can *ONLY* be used for the purchase of modern US arms, aka additional congressional gifts to the MIC not in the DOD budget). From the law of unintended consequences, the invaders were told to bypass ammo dumps looking for WMDs; when they got around to going back, over a million metric tons had evaporated (showing up later in IEDs)
https://www.amazon.com/Prophets-War-Lockheed-Military-Industrial-ebook/dp/B0047T86BA/

... from the truth-is-stranger-than-fiction and law-of-unintended-consequences-that-come-back-to-bite-you department, much of radical Islam & ISIS can be considered our own fault; VP Bush in the 80s
https://www.amazon.com/Family-Secrets-Americas-Invisible-Government-ebook/dp/B003NSBMNA/
pg292/loc6057-59:
There was also a calculated decision to use the Saudis as surrogates in the cold war. The United States actually encouraged Saudi efforts to spread the extremist Wahhabi form of Islam as a way of stirring up large Muslim communities in Soviet-controlled countries. (It didn't hurt that Muslim Soviet Asia contained what were believed to be the world's largest undeveloped reserves of oil.)

... snip ...

Saudi radical extremist Islam/Wahhabi loosened on the world ... bin Laden & 15 of the 19 9/11 hijackers were Saudis (some claims that 95% of extreme Islamic world terrorism is Wahhabi related)
https://en.wikipedia.org/wiki/Wahhabism

Mattis, somewhat more PC (politically correct)
https://www.amazon.com/Call-Sign-Chaos-Learning-Lead-ebook/dp/B07SBRFVNH/
pg21/loc349-51:
Ayatollah Khomeini's revolutionary regime took hold in Iran by ousting the Shah and swearing hostility against the United States. That same year, the Soviet Union was pouring troops into Afghanistan to prop up a pro-Russian government that was opposed by Sunni Islamist fundamentalists and tribal factions. The United States was supporting Saudi Arabia's involvement in forming a counterweight to Soviet influence.

... snip ...

The Danger of Fibbing Our Way into War. Falsehoods and fat military budgets can make conflict more likely
https://web.archive.org/web/20200317032532/https://www.pogo.org/analysis/2020/01/the-danger-of-fibbing-our-way-into-war/
The Day I Realized I Would Never Find Weapons of Mass Destruction in Iraq
https://www.nytimes.com/2020/01/29/magazine/iraq-weapons-mass-destruction.html

Note: In the 80s, (father) VP Bush supported Saddam/Iraq in the Iran/Iraq war; then in the early 90s (as president), a sat. photo recon analyst tells the White House that Saddam was marshaling forces to invade Kuwait. The White House says that Saddam would do no such thing and proceeds to discredit the analyst. Later the analyst informs the White House that Saddam was marshaling forces to invade Saudi Arabia; now the White House has to choose between Saddam and the Saudis.
https://www.amazon.com/Long-Strange-Journey-Intelligence-ebook/dp/B004NNV5H2/

military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
perpetual war posts
https://www.garlic.com/~lynn/submisc.html#perpetual.war
WMD posts
https://www.garlic.com/~lynn/submisc.html#wmds

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Integrity

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe Integrity
Date: 19 Jul, 2024
Blog: Facebook
Since around the turn of the century, i86 server blades have tended to have ten times the processing power of a max-configured mainframe, and a large cloud operation will have at least a dozen megadatacenters around the world, each one with half a million (or more) such server blades (megadatacenters have enormous automation, the processing equivalent of millions of max-configured mainframes managed with 70-80 staff).

Last product we did at IBM was HA/CMP, started out as HA/6000 for NYTimes to move their newspaper system from VAXCluster to RS/6000. I rename it HA/CMP when I start doing scientific/technical cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres) that have VAXCluster support in same source base with Unix (I do a distributed lock manager with VAXCluster semantics to ease ports). Early Jan1992, in meeting with Oracle CEO and staff, IBM AWD/Hester tells them that we would have 16processor clusters by mid92 and 128processor clusters by ye92. Somebody must have told IBM Kingston supercomputer group because by end of Jan92, cluster scale-up is transferred for announce as IBM supercomputer (for technical/scientific *ONLY*) and we are told we can't work on anything with more than four processors; we leave IBM a few months later. Contributing may have been mainframe DB2 group complaining if we were allowed to proceed it would be years ahead of them (trivia: in late 70s I had worked on System/R, original SQL/relational implementation, precursor to DB2).
1993: eight processor ES/9000-982 : 408MIPS, 51MIPS/processor
1993: RS6000/990 : 126MIPS; 16-way: 2016MIPS, 128-system: 16,128MIPS


RS6000 RIOS didn't have coherent caches for SMP (so scale-up was purely cluster). The Somerset/AIM Power/PC 1993 announcement includes coherent caches for SMP (so it can do cluster, multiprocessor, and clusters of multiprocessors).

note: by the turn of the century, i86 processors had a hardware layer that translated i86 instructions into RISC micro-ops for execution, largely negating the gap between CISC & RISC.

https://www.cecs.uci.edu/~papers/mpr/MPR/19991025/131403.pdf
1999, IBM PowerPC 440 hits 1,000MIPS (>six times z900 processor)
1999, Intel Pentium III hits 2,054MIPS (13 times z900 processor)

2003, z990 32 processors, 9BIPS (281MIPS/proc)
2003, Pentium4 hits 9,726MIPS, faster than 32 processor z990


More recent mainframe numbers are taken from IBM pubs giving the increase of each newer model compared to the previous

z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS, (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
EC12, 101 processors, 75BIPS (743MIPS/proc), Aug2012
z13, 140 processors, 100BIPS (710MIPS/proc), Jan2015
z14, 170 processors, 150BIPS (862MIPS/proc), Aug2017
z15, 190 processors, 190BIPS* (1000MIPS/proc), Sep2019
* pubs say z15 1.25 times z14 (1.25*150BIPS or 190BIPS)
z16, 200 processors, 222BIPS* (1111MIPS/proc), Sep2022
* pubs say z16 1.17 times z15 (1.17*190BIPS or 222BIPS)


z196/jul2010, 50BIPS, 625MIPS/processor
z16/sep2022, 222BIPS, 1111MIPS/processor


12yrs, Z196->Z16, 222/50=4.4times total system BIPS; 1111/625=1.8times per processor MIPS.

The 2010 E5-2600 server blade benchmarked at 500BIPS, 10 times the max-configured 2010 z196 and more than twice the 2022 z16.

A large cloud operation can have a dozen or more megadatacenters around the world, each with half a million or more high-end blades ... aggregating a few million TIPS, aka millions of millions of MIPS (TIPS: thousand BIPS, or a million MIPS) ... enormous automation, a megadatacenter run with 70-80 staff.

trivia: a max-configured z196 went for $30M ($600,000/BIPS) compared to IBM's base list price for an E5-2600 server blade of $1815 ($3.63/BIPS). Shortly later there was server chip vendor press that they were shipping at least half their product directly to cloud megadatacenters (which assemble their own systems for 1/3rd the cost of brand name servers) ... and IBM unloads its server blade business.
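
The arithmetic behind the price/performance and aggregate-capacity claims (blades-per-site and site count are the figures stated above):

    z196_price, z196_bips   = 30_000_000, 50
    blade_price, blade_bips = 1_815, 500

    print("z196 :", z196_price / z196_bips, "$/BIPS")              # 600,000
    print("blade:", round(blade_price / blade_bips, 2), "$/BIPS")  # 3.63

    blades_per_site, sites = 500_000, 12             # half million blades/site, a dozen sites
    total_bips = blade_bips * blades_per_site * sites
    print("aggregate:", total_bips // 1000, "TIPS")  # 3,000,000 -- "a few million TIPS"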

more trivia: In 1980, STL (since renamed SVL) is bursting at the seams and 300 people (w/3270s) from the IMS group are being transferred to an offsite bldg (about half-way between STL & SJR) with dataprocessing back to the STL datacenter. They had tried "remote 3270" but found the human factors unacceptable. I get con'ed into doing channel-extender support so they can install channel-attached 3270 controllers in the offsite bldg and there is no perceptible human factors difference between offsite and in STL.

A side-effect is that the STL 168s for the offsite group have 10-15% improvement in system throughput. The channel-attached 3270 controllers had previously been spread across all the channels shared with disk controllers and the (relatively) slower 3270 controller channel busy was interfering with disk I/O. The channel-extender boxes were faster than even the disk controllers, drastically reducing the channel busy for 3270 traffic (improving disk and system throughput). There was even talk about placing all STL 3270 controllers behind channel-extender boxes just to improve all systems' throughput. The hardware vendor then tries to get IBM to release my support, but there is a group in POK playing with some serial stuff and they get that vetoed.

In 1988, the IBM branch office asks if I could help LLNL standardize some stuff they are playing with, which quickly becomes the fibre channel standard (including some stuff I had done in 1980), "FCS" initially 1gbit full-duplex, aggregate 200mbytes/sec. Then POK gets their stuff released with ES/9000 as ESCON (when it is already obsolete). Then some POK engineers get involved with FCS and define a heavy-weight protocol that radically reduces native throughput, which eventually ships as FICON. The latest public benchmark I can find is the z196 "Peak I/O" that gets 2M IOPS with 104 FICON. About the same time an FCS is announced for E5-2600 server blades claiming over a million IOPS (two such FCS have higher throughput than 104 FICON). Note IBM pubs recommend SAPs (system assist processors that do the actual I/O) be held to 70% CPU ... which would be about 1.5M IOPS.

(cloud) megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FCS and FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

recent posts mentioning mainframe and non-mainframe throughput
https://www.garlic.com/~lynn/2024b.html#98 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#53 Vintage Mainframe
https://www.garlic.com/~lynn/2024.html#81 Benchmarks

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Integrity

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe Integrity
Date: 20 Jul, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024d.html#94 Mainframe Integrity

see previous/above reference to having worked on FCS standard in 1988, IBM then releases ESCON in the 90s (when it is already obsolete). Eventually IBM becomes involved in FCS and defines heavy weight protocol for FCS that gets released as FICON. Published benchmark z196 "Peak I/O" gets 2M IOPS with 104 FICON (on 104 FCS) when FCS announced for e5-2600 blades claiming over million IOPS (two FCS have higher throughput than 104 FICON). However IBM pubs recommend holding SAPs to 70% CPU which would be 1.5M IOPS.

IBM claims about the mainframe being an I/O machine date somewhat from the 3090, when they had to significantly increase the number of channels because the 3880 controller had significantly increased channel busy overhead (compared to the previous 3830 disk controller) ... marketing respun the huge increase in the number of channels as the 3090 being a wonderful I/O machine (when it was really needed to compensate for the huge 3880 channel busy overhead). The huge increase in 3090 channels also required an additional TCM and the 3090 group semi-facetiously said they were going to bill the 3880 group for the increase in 3090 manufacturing costs.

I've commented before about how, when I transferred to IBM SJR, I got to wander around datacenters in silicon valley, including disk engineering and product test across the street (bldgs14&15) ... at the time they were running pre-scheduled, 7x24, stand-alone mainframe testing; they mentioned that they had recently tried MVS, but MVS had 15min mean-time-between-failure (in that environment). I offered to rewrite the I/O supervisor so it was bullet proof and never fail, allowing any amount of concurrent, on-demand testing, greatly improving productivity. At the same time as greatly improving reliability and integrity, I also radically cut pathlength (compared to MVS) ... I joked that the big reason for the SSCH stuff in 370/xa (and later the SAPs) was that the MVS I/O pathlength was so bad.

One of the reasons that large cloud operations use (free) linux and use free DBMS ... is cost (aka large cloud operation having dozens of megadatacenters with each megadatacenter having millions and tens of millions of cores). Turn of century IBM financials had mainframe hardware a few percent of revenue and dropping (when in the 80s mainframe hardware was majority of revenue). About a decade ago, mainframe hardware was couple percent of revenue and still dropping, but the mainframe group was 25% of revenue (and 40% of profit) ... nearly all software and services.

Enormous optimization has significantly dropped system costs, people costs, and software costs ... so that power/energy was becoming the major cost and lots of attention was being paid to getting chip makers to optimize energy/BIPS. To put pressure on i86 chip vendors they've been testing ARM chips (because of the energy/processing optimization done for battery operation). Folklore is AIM/Somerset (aka joint apple, ibm, motorola) was to move apple to power/pc; then because AIM wasn't keeping up with energy/processing, they switched to i86, and then because i86 wasn't keeping up with energy/processing they have switched to their own ARM-designed chips.

FCS & FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
getting to play disk engineer in bldg14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk
cloud megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Integrity

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe Integrity
Date: 20 Jul, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024d.html#94 Mainframe Integrity
https://www.garlic.com/~lynn/2024d.html#95 Mainframe Integrity

Late 80s, a senior disk engineer got a talk scheduled at the communication group's internal, world-wide, annual conference, supposedly on the 3174, but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing data fleeing datacenters to more distributed-computing-friendly platforms, with a drop in disk sales. The disk division had come up with a number of solutions but they were constantly vetoed by the communication group (with their corporate strategic ownership of everything that crossed datacenter walls, fiercely fighting off client/server and distributed computing trying to preserve their dumb terminal paradigm). It wasn't just disks, and a couple years later, IBM had one of the largest losses in the history of US companies and was being reorganized into the 13 "baby blues" in preparation for breaking up the company:
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk (corporate hdqtrs) asking if we could help with the breakup of the company. Before we get started, the board brings in the former president of AMEX who (somewhat) reverses the breakup (although it wasn't long before the disk division is gone).

... other trivia: the disk division VP of software's partial countermeasure to the communication group was investing in distributed computing startups that would use IBM disks ... and he would periodically ask us to visit his investments to see if we could provide any help

I periodically mention that 360 I/O had lots of trade-offs because of the technology of the period ... little storage capacity ... everything kept back in processor memory, with a half-duplex channel and enormous protocol chatter between controller and processor memory over that half-duplex channel.

for the 1980 channel-extender I went to full-duplex links ... allowing streaming concurrently in both directions ... local memory at the controller end ... minimizing the enormous protocol latency ... as transfer speeds went to gbits, gbytes, tens of gbytes ... protocol chatter latency becomes dominant (especially if half-duplex) ... in the 3880 case the very slow processor worsened the protocol chatter latency. In bldgs14&15, the disk engineers complained bitterly about bean counter executives forcing the cheap, slow control microprocessor on the 3880.

No CKD disks have been made for decades ... all are simulated on industry standard fixed-block devices ... CKD I/O, besides the slow FICON protocol, also has CKD simulation overhead (the 3380 was already starting to be fixed-block, which can be seen in the records/track formulas where record length has to be rounded up to cell size; see the sketch below). About the same time as being asked to help LLNL with FCS standardization, the father of 801/RISC asked me if I could help him with a disk "wide-head" ... the 3380 originally had 20-track spacing between each data track; it was then cut to 10-track spacing ... doubling capacity and the number of cylinders/tracks ... then cut again for triple capacity. "Wide-head" would format 16 closely spaced data tracks plus servo tracks; the wide-head would follow the servo tracks on each side ... data transfer in parallel on all 16 tracks. The problem was mainframe channels couldn't handle 50mbyte/sec (similar to RAID transfer rates).
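
Illustrative sketch only -- the cell size, per-record overhead and track size below are assumed, NOT the actual 3380 capacity formula -- of how rounding record lengths up to a cell size betrays the fixed-block device underneath the CKD interface, plus the wide-head transfer-rate arithmetic (16 tracks at the 3380's ~3mbyte/sec):

    import math

    CELL = 32              # assumed cell size, bytes
    TRACK_CELLS = 1500     # assumed usable cells per track
    OVERHEAD_CELLS = 15    # assumed per-record gap/count overhead, in cells

    def cells_per_record(data_len):
        # data length rounded UP to a whole number of cells -- the fixed-block tell
        return OVERHEAD_CELLS + math.ceil(data_len / CELL)

    print(cells_per_record(4096), cells_per_record(4097))    # 143 vs 144: one extra byte costs a whole cell
    print(TRACK_CELLS // cells_per_record(4096), "records/track at 4k")

    print(16 * 3, "mbyte/sec")   # 16 parallel tracks at ~3mbyte/sec each, the ~50mbyte/sec the channels couldn't handle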

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
getting to play disk engineer in bldg14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk
DASD, CKD, FBA, multi-track search, etc, posts
https://www.garlic.com/~lynn/submain.html#dasd
FCS and FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Integrity

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe Integrity
Date: 20 Jul, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024d.html#94 Mainframe Integrity
https://www.garlic.com/~lynn/2024d.html#95 Mainframe Integrity
https://www.garlic.com/~lynn/2024d.html#96 Mainframe Integrity

Not long after leaving IBM, I was brought in as a consultant to a small client/server startup. Two former Oracle employees that we had worked with on HA/CMP cluster scale-up were there, responsible for something they called "commerce server", and they wanted to be able to do payment transactions; the startup had also invented this technology they called "SSL" that they wanted to use; the result is now frequently called "electronic commerce". I had responsibility for everything between the ecommerce servers and the financial payment networks. Afterwards I put together a talk on "Why Internet Wasn't Business Critical Dataprocessing" (that the Internet Standards RFC editor, Postel, sponsored at ISI/USC) based on the work I had to do: multi-level security layers, multi-redundant operation, diagnostics, and process and procedural software and documents.

trivia: when doing HA/CMP, we had to make it both fault and exploit/vulnerability resistant/resilient. Jim Gray in 1984 had published a study of availability/outages and found hardware was getting more reliable and outages were shifting to people mistakes and environmental causes (tornadoes, hurricanes, floods, earthquakes, power outages, etc), so we were also doing a lot on people operation and geographically separated operation (coining disaster survivability and geographic survivability when out marketing) ... and the IBM S/88 product administrator started taking us around to their customers and also asked me to do a section for the corporate continuous availability product document (but it got pulled because Rochester/AS400 and POK/mainframe complained that at the time they couldn't meet the requirements).
https://www.garlic.com/~lynn/grayft84.pdf

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
availability posts
https://www.garlic.com/~lynn/submain.html#available
"e-commerce" paymnet gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

a couple posts referring to "why internet isn't business critical dataprocessing", postel, ha/cmp
https://www.garlic.com/~lynn/2022g.html#26 Why Things Fail
https://www.garlic.com/~lynn/2022e.html#105 FedEx to Stop Using Mainframes, Close All Data Centers By 2024
https://www.garlic.com/~lynn/2021e.html#74 WEB Security

--
virtualization experience starting Jan1968, online at home since Mar1970

Why Bush Invaded Iraq

From: Lynn Wheeler <lynn@garlic.com>
Subject: Why Bush Invaded Iraq
Date: 21 Jul, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024d.html#93 Why Bush Invaded Iraq

History Is Un-American. Real Americans Create Their Own Futures
https://www.linkedin.com/pulse/history-un-american-real-americans-create-own-futures-lynn-wheeler/
and
https://bracingviews.com/2023/01/04/history-is-un-american/
Wars and More Wars: The Sorry U.S. History in the Middle East
https://www.counterpunch.org/2022/12/30/wars-and-more-wars-the-sorry-u-s-history-in-the-middle-east/

The World Crisis, Vol. 1: Churchill explains that the mess in the Middle East started with the move from 13.5in to 15in naval guns (leading to the move from coal to oil)
https://www.amazon.com/Crisis-1911-1914-Winston-Churchill-Collection-ebook/dp/B07H18FWXR/
loc2012-14:
From the beginning there appeared a ship carrying ten 15-inch guns, and therefore at least 600 feet long with room inside her for engines which would drive her 21 knots and capacity to carry armour which on the armoured belt, the turrets and the conning tower would reach the thickness unprecedented in the British Service of 13 inches.

loc2087-89:
To build any large additional number of oil-burning ships meant basing our naval supremacy upon oil. But oil was not found in appreciable quantities in our islands. If we required it, we must carry it by sea in peace or war from distant countries.

loc2151-56:
This led to enormous expense and to tremendous opposition on the Naval Estimates. Yet it was absolutely impossible to turn back. We could only fight our way forward, and finally we found our way to the Anglo-Persian Oil agreement and contract, which for an initial investment of two millions of public money (subsequently increased to five millions) has not only secured to the Navy a very substantial proportion of its oil supply, but has led to the acquisition by the Government of a controlling share in oil properties and interests which are at present valued at scores of millions sterling, and also to very considerable economies, which are still continuing, in the purchase price of Admiralty oil.

... snip ...

When the newly elected Iranian democratic government wanted to review the Anglo-Persian contract, the US arranged a coup and backed the Shah as a front
https://unredacted.com/2018/03/19/cia-caught-between-operational-security-and-analytical-quality-in-1953-iran-coup-planning/
https://en.wikipedia.org/wiki/Kermit_Roosevelt,_Jr%2E
https://en.wikipedia.org/wiki/1953_Iranian_coup_d%27%C3%A9tat
... and Schwarzkopf (senior) trained the secret police to help keep the Shah in power
https://en.wikipedia.org/wiki/SAVAK
Savak Agent Describes How He Tortured Hundreds
https://www.nytimes.com/1979/06/18/archives/savak-agent-describes-how-he-tortured-hundreds-trial-is-in-a-mosque.html

The Iranian people eventually revolt against the horribly oppressive, (US backed) autocratic government.

CIA Director Colby wouldn't approve the "Team B" analysis inflating USSR military capability to justify a huge DOD budget increase. Rumsfeld got Colby replaced with Bush (who would approve the "Team B" analysis); after Colby was replaced, Rumsfeld resigns as White House chief of staff to become SECDEF (and is replaced by his assistant Cheney)
https://en.wikipedia.org/wiki/Team_B
Then in the 80s, former CIA director H.W. is VP, he and Rumsfeld are involved in supporting Iraq in the Iran/Iraq war
http://en.wikipedia.org/wiki/Iran%E2%80%93Iraq_War
including WMDs (note picture of Rumsfeld with Saddam)
http://en.wikipedia.org/wiki/United_States_support_for_Iraq_during_the_Iran%E2%80%93Iraq_war

VP and former CIA director repeatedly claims no knowledge of
http://en.wikipedia.org/wiki/Iran%E2%80%93Contra_affair
because he was fulltime administration point person deregulating financial industry ... creating S&L crisis
http://en.wikipedia.org/wiki/Savings_and_loan_crisis
along with other members of his family
http://en.wikipedia.org/wiki/Savings_and_loan_crisis#Silverado_Savings_and_Loan
and another
http://query.nytimes.com/gst/fullpage.html?res=9D0CE0D81E3BF937A25753C1A966958260

Republicans and Saudis bailing out the Bushes.

In the early 90s, H.W. is president and Cheney is SECDEF. Sat. photo recon analyst told White House that Saddam was marshaling forces to invade Kuwait. White House said that Saddam would do no such thing and proceeded to discredit the analyst. Later the analyst informed the White House that Saddam was marshaling forces to invade Saudi Arabia, now the White House has to choose between Saddam and the Saudis.
https://www.amazon.com/Long-Strange-Journey-Intelligence-ebook/dp/B004NNV5H2/

... roll forward ... Bush2 is president and presides over the huge cut in taxes (the 1st time taxes were cut rather than raised to pay for wars, in this case two of them), a huge increase in spending, an explosion in debt (the fiscal responsibility act was allowed to lapse ... it had been on its way to eliminating all federal debt), the economic mess (70 times larger than his father's S&L crisis) and the forever wars; Cheney is VP, Rumsfeld is SECDEF (again) and one of the Team B members is deputy SECDEF (and a major architect of Iraq policy).
https://en.wikipedia.org/wiki/Paul_Wolfowitz

Team B "posts"
https://www.garlic.com/~lynn/submisc.html#team.b
military-industrial(-congressionaL) complex "posts"
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
S&L crisis "posts"
https://www.garlic.com/~lynn/submisc.html#s&l.crisis
fiscal responsibility act "posts"
https://www.garlic.com/~lynn/submisc.html#fiscal.responsibility.act
tax fraud, tax evasion, tax loopholes, tax abuse, tax avoidance, tax haven posts
https://www.garlic.com/~lynn/submisc.html#tax.evasion
economic mess "posts"
https://www.garlic.com/~lynn/submisc.html#economic.mess
perpetual war "posts"
https://www.garlic.com/~lynn/submisc.html#perpetual.war
WMD "posts"
https://www.garlic.com/~lynn/submisc.html#wmd

--
virtualization experience starting Jan1968, online at home since Mar1970

Interdata Clone IBM Telecommunication Controller

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Interdata Clone IBM Telecommunication Controller
Date: 21 Jul, 2024
Blog: Facebook
I took a two credit-hr intro to fortran/computers; at the end of the semester I was hired to rewrite 1401 MPIO for the 360/30. The univ was getting a 360/67 for tss/360 to replace the 709/1401 and temporarily got a 360/30 (that had 1401 microcode emulation) to replace the 1401, pending arrival of the 360/67 (the univ shut down the datacenter on weekends, and I would have it dedicated). Within a year of taking the intro class, the 360/67 showed up and I was hired fulltime with responsibility for OS/360 (tss/360 never came to production, so it ran as a 360/65 with os/360) ... and I continued to have my dedicated weekend time. Student fortran ran in under a second on the 709 (tape to tape), but initially over a minute on the 360/65. I install HASP and it cuts the time in half. I then start revamping stage2 sysgen to place datasets and PDS members to optimize disk seeks and multi-track searches, cutting another 2/3rds to 12.9secs; never got better than the 709 until I install univ of waterloo WATFOR.

CSC comes out to install CP67/CMS (3rd install after CSC itself and MIT Lincoln Labs), which I mostly got to play with during my weekend dedicated time. The first few months I mostly spent rewriting CP67 pathlengths for running os/360 in a virtual machine; the test os/360 stream took 322secs bare, initially ran 856secs virtually; CP67 CPU went from 534secs down to 113secs. CP67 had 1052 & 2741 support with automatic terminal type identification (using the controller SAD CCW to switch port terminal type). The univ had some TTY terminals so I added TTY support integrated with the automatic terminal type identification. I then wanted to have a single dial-up number ("hunt group") for all terminals ... but IBM had taken a short-cut and hardwired port line-speed ... which kicked off a univ program to build a clone controller: a channel interface board for an Interdata/3 programmed to emulate the IBM controller, with the addition of automatic line-speed detection.

Later it was upgraded to an Interdata/4 for the channel interface with a cluster of Interdata/3s for the port interfaces. Four of us get written up as responsible for (some part of) the IBM clone controller business ... initially sold by Interdata and then by Perkin-Elmer
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division

An early glitch in the clone controller development ... was putting a character's leading bits coming off the line starting at the high-order bit position ... however IBM 360 telecommunication controllers put a character's leading bits starting at the low-order bit position, so (ASCII) character bytes arriving in computer memory were bit-reversed (and the IBM EBCDIC<->ASCII translate tables handled the bit-reversed characters). For controller compatibility, we had to follow the IBM 360 telecommunication controller convention.
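
A small sketch of the bit-ordering convention described above (illustrative; it just shows why the translate tables end up built for bit-reversed values):

    # if arriving line bits are deposited starting at the low-order bit position
    # (the IBM 360 telecom controller convention), an ASCII character lands in
    # memory bit-reversed, and the translate tables are built for those values
    def reverse_bits(byte):
        out = 0
        for _ in range(8):
            out = (out << 1) | (byte & 1)
            byte >>= 1
        return out

    ascii_A = 0x41
    print(hex(reverse_bits(ascii_A)))            # 0x82 -- what actually lands in memory
    table = {reverse_bits(c): chr(c) for c in range(128)}   # indexed by bit-reversed value
    print(table[reverse_bits(ord('A'))])         # 'A' recovered via the reversed-code table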

The IBM terminals weren't actually EBCDIC either; they used tilt/rotate codes, which had to be translated to/from EBCDIC.

IBM clone telecommunication controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

other recent posts
https://www.garlic.com/~lynn/2024c.html#53 IBM 3705 & 3725
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024b.html#114 EBCDIC
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2023f.html#102 MPIO, Student Fortran, SYSGENS, CP67, 370 Virtual Memory
https://www.garlic.com/~lynn/2023f.html#34 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023e.html#64 Computing Career

--
virtualization experience starting Jan1968, online at home since Mar1970

Chipsandcheese article on the CDC6600

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Chipsandcheese article on the CDC6600
Newsgroups: comp.arch
Date: Mon, 22 Jul 2024 12:57:03 -1000
Michael S <already5chosen@yahoo.com> writes:
At the end, the influence of 6600 on computers we use today is close to zero. On the other hand, influence of S/360 Model 85 is massive and influence of S/360 Model 91 is significant, although far less than the credit it is often given in popular articles. Back at their time 6600 was huge success and both Model 85 and Model 91 were probably considered failures.

recent comp.arch posts
https://www.garlic.com/~lynn/2024d.html#0 time-sharing history, Privilege Levels Below User
https://www.garlic.com/~lynn/2024d.html#1 time-sharing history, Privilege Levels Below User
https://www.garlic.com/~lynn/2024d.html#2 time-sharing history, Privilege Levels Below User
https://www.garlic.com/~lynn/2024d.html#24 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#32 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#34 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#39 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#40 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#41 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#48 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#50 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#54 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#55 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#58 Architectural implications of locate mode I/O and channels
https://www.garlic.com/~lynn/2024d.html#61 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#83 Continuations

Amdahl wins the battle to make ACS 360-compatible, then shortly later ACS was shut down (folklore is executives felt it would advance the state-of-the-art too fast and IBM would lose control of the market); Amdahl leaves IBM shortly after ... lots of history, including some of the ACS features showing up more than two decades later with ES/9000
https://people.computing.clemson.edu/~mark/acs_end.html
Of the 26,000 IBM computer systems in use, 16,000 were S/360 models (that is, over 60%). [Fig. 1.311.2]

Of the general-purpose systems having the largest fraction of total installed value, the IBM S/360 Model 30 was ranked first with 12% (rising to 17% in 1969). The S/360 Model 40 was ranked second with 11% (rising to almost 15% in 1970). [Figs. 2.10.4 and 2.10.5]

Of the number of operations per second in use, the IBM S/360 Model 65 ranked first with 23%. The Univac 1108 ranked second with slightly over 14%, and the CDC 6600 ranked third with 10%. [Figs. 2.10.6 and 2.10.7]


... snip ...

old email:

Date: 04/23/81 09:57:42
To: wheeler

your ramblings concerning the corp(se?) showed up in my reader yesterday. like all good net people, i passed them along to 3 other people. like rabbits interesting things seem to multiply on the net. many of us here in pok experience the sort of feelings your mail seems so burdened by: the company, from our point of view, is out of control. i think the word will reach higher only when the almighty $$$ impact starts to hit. but maybe it never will. its hard to imagine one stuffed company president saying to another (our) stuffed company president i think i'll buy from those inovative freaks down the street. '(i am not defending the mess that surrounds us, just trying to understand why only some of us seem to see it).

bob tomasulo and dave anderson, the two poeple responsible for the model 91 and the (incredible but killed) hawk project, just left pok for the new stc computer company. management reaction: when dave told them he was thinking of leaving they said 'ok. 'one word. 'ok. ' they tried to keep bob by telling him he shouldn't go (the reward system in pok could be a subject of long correspondence). when he left, the management position was 'he wasn't doing anything anyway. '

in some sense true. but we haven't built an interesting high-speed machine in 10 years. look at the 85/165/168/3033/trout. all the same machine with treaks here and there. and the hordes continue to sweep in with faster and faster machines. true, endicott plans to bring the low/middle into the current high-end arena, but then where is the high-end product development?


... snip ... top of post, old email index

In the first part of the 70s, IBM had the Future System effort, completely different from 370 and going to completely replace it (internal politics during FS were killing off 370 products; the lack of new 370s during FS is credited with giving the clone 370 makers, including Amdahl, their market foothold) ... when FS implodes there was a mad rush to get new stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033 & 3081 efforts in parallel.
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

370/xa was referred to as "811" for the architecture/design documents' Nov1978 publication date; nearly all of it was to address MVS shortcomings (aka the head of POK had shortly before managed to convince corporate to kill the VM370 product, shut down the development group and have all the people transferred to POK for MVS/XA; Endicott did eventually manage to save the VM370 product mission ... for the "mid-range").

trivia: when I joined IBM, one of my hobbies was enhanced production operating system for internal datacenters (including world-wide sales&market support HONE). In the original morph of CP67->VM370, lots of stuff was dropped and/or simplified (including multiprocessor support). In 1974, I start moving a bunch of stuff to VM370R2, including kernel reorg for multiprocessor support, but not actual multiprocessor support itself.

In 1975, I move my CSC/VM system to VM370R3 and add multiprocessor support, originally for the US consolidated sales&marketing support HONE datacenters up in Palo Alto (the US systems had been consolidated into a single-system-image, loosely-coupled, shared-DASD operation with load-balancing and fall-over; one of the largest such complexes in the world). The multiprocessor support allowed them to add a 2nd processor to each system (making it the largest in the world; airlines' TPF had similar shared-DASD complexes, but TPF didn't get SMP support for another decade). I had done some hacks in order to get two-processor systems twice the throughput of a single processor (at the time MVS documentation said two-processor MVS had 1.2-1.5 times the throughput of a single processor).

With the implosion of FS (and the demise of the VM370 development group) ... I got roped into helping with a 16-processor 370 SMP and we con'ed the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips). Everybody thought it was great until somebody tells the head of POK, it could be decades before the POK favorite son operating system (MVS) would have (effective) 16-processor support (POK doesn't ship a 16-processor system until after the turn of the century) and the head of POK invites some of us to never visit POK again (and tells the 3033 processor engineers, heads down on 3033 and no distractions). Some POK executives were also out bullying internal datacenters (including HONE) that they had to convert from VM370 to MVS. Once 3033 was out the door, the 3033 engineers start on trout/3090.

trivia: Jan1979, I was con'ed into doing a 6600 fortran benchmark on an engineering IBM4341 (mid-range), for a national lab that was looking at getting 70 for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami) ... the engineering 4341 benchmark was slightly slower than 6600 but production machines that shipped, were slightly faster.

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
SMP, tightly-coupled, multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

recent posts mentioning 4341 compute farm
https://www.garlic.com/~lynn/2024d.html#85 ATT/SUN and Open System Foundation
https://www.garlic.com/~lynn/2024d.html#48 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#21 IBM CSC and MIT MULTICS
https://www.garlic.com/~lynn/2024d.html#15 Mid-Range Market
https://www.garlic.com/~lynn/2024d.html#5 Disconnect Between Coursework And Real-World Computers
https://www.garlic.com/~lynn/2024c.html#107 architectural goals, Byte Addressability And Beyond
https://www.garlic.com/~lynn/2024c.html#100 IBM 4300
https://www.garlic.com/~lynn/2024b.html#115 Disk & TCP/IP I/O
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#43 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#22 HA/CMP
https://www.garlic.com/~lynn/2024.html#100 Multicians
https://www.garlic.com/~lynn/2024.html#64 IBM 4300s
https://www.garlic.com/~lynn/2024.html#48 VAX MIPS whatever they were, indirection in old architectures
https://www.garlic.com/~lynn/2024.html#35 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023g.html#107 Cluster and Distributed Computing
https://www.garlic.com/~lynn/2023g.html#61 PDS Directory Multi-track Search
https://www.garlic.com/~lynn/2023g.html#57 Future System, 115/125, 138/148, ECPS
https://www.garlic.com/~lynn/2023g.html#22 Vintage Cray
https://www.garlic.com/~lynn/2023g.html#15 Vintage IBM 4300
https://www.garlic.com/~lynn/2023f.html#12 Internet
https://www.garlic.com/~lynn/2023e.html#80 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#71 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#59 801/RISC and Mid-range
https://www.garlic.com/~lynn/2023e.html#52 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#47 IBM 360/67
https://www.garlic.com/~lynn/2023d.html#102 Typing, Keyboards, Computers
https://www.garlic.com/~lynn/2023d.html#93 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#86 5th flr Multics & 4th flr science center
https://www.garlic.com/~lynn/2023d.html#84 The Control Data 6600
https://www.garlic.com/~lynn/2023d.html#19 IBM 3880 Disk Controller
https://www.garlic.com/~lynn/2023d.html#1 Some 3033 (and other) Trivia
https://www.garlic.com/~lynn/2023c.html#46 IBM DASD
https://www.garlic.com/~lynn/2023b.html#78 IBM 158-3 (& 4341)
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2023.html#18 PROFS trivia
https://www.garlic.com/~lynn/2022h.html#108 IBM 360
https://www.garlic.com/~lynn/2022h.html#48 smaller faster cheaper, computer history thru the lens of esthetics versus economics
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#8 IBM 4341
https://www.garlic.com/~lynn/2022f.html#89 CDC6600, Cray, Thornton
https://www.garlic.com/~lynn/2022f.html#28 IBM Power: The Servers that Apple Should Have Created
https://www.garlic.com/~lynn/2022f.html#26 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#67 SHARE LSRAD Report
https://www.garlic.com/~lynn/2022e.html#25 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#11 VM/370 Going Away
https://www.garlic.com/~lynn/2022d.html#101 IBM Stretch (7030) -- Aggressive Uniprocessor Parallelism
https://www.garlic.com/~lynn/2022d.html#86 IBM Z16 - The Mainframe Is Dead, Long Live The Mainframe
https://www.garlic.com/~lynn/2022d.html#78 US Takes Supercomputer Top Spot With First True Exascale Machine
https://www.garlic.com/~lynn/2022d.html#66 VM/370 Turns 50 2Aug2022
https://www.garlic.com/~lynn/2022c.html#101 IBM 4300, VS1, VM370
https://www.garlic.com/~lynn/2022c.html#56 ASCI White
https://www.garlic.com/~lynn/2022c.html#19 Telum & z16
https://www.garlic.com/~lynn/2022c.html#18 IBM Left Behind
https://www.garlic.com/~lynn/2022c.html#5 4361/3092
https://www.garlic.com/~lynn/2022b.html#110 IBM 4341 & 3270
https://www.garlic.com/~lynn/2022b.html#102 370/158 Integrated Channel
https://www.garlic.com/~lynn/2022b.html#16 Channel I/O
https://www.garlic.com/~lynn/2022.html#124 TCP/IP and Mid-range market
https://www.garlic.com/~lynn/2022.html#112 GM C4 and IBM HA/CMP
https://www.garlic.com/~lynn/2022.html#95 Latency and Throughput
https://www.garlic.com/~lynn/2022.html#15 Mainframe I/O
https://www.garlic.com/~lynn/2021k.html#133 IBM Clone Controllers
https://www.garlic.com/~lynn/2021k.html#107 IBM Future System
https://www.garlic.com/~lynn/2021j.html#94 IBM 3278
https://www.garlic.com/~lynn/2021j.html#52 ESnet
https://www.garlic.com/~lynn/2021h.html#107 3277 graphics
https://www.garlic.com/~lynn/2021f.html#84 Mainframe mid-range computing market
https://www.garlic.com/~lynn/2021f.html#30 IBM HSDT & HA/CMP
https://www.garlic.com/~lynn/2021d.html#57 IBM 370
https://www.garlic.com/~lynn/2021c.html#50 IBM CEO
https://www.garlic.com/~lynn/2021c.html#47 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2021b.html#90 IBM Innovation
https://www.garlic.com/~lynn/2021b.html#55 In the 1970s, Email Was Special
https://www.garlic.com/~lynn/2021b.html#44 HA/CMP Marketing
https://www.garlic.com/~lynn/2021b.html#24 IBM Recruiting
https://www.garlic.com/~lynn/2021b.html#1 Will The Cloud Take Down The Mainframe?
https://www.garlic.com/~lynn/2021.html#76 4341 Benchmarks
https://www.garlic.com/~lynn/2021.html#53 Amdahl Computers
https://www.garlic.com/~lynn/2021.html#2 How an obscure British PC maker invented ARM and changed the world
https://www.garlic.com/~lynn/2020.html#38 Early mainframe security

--
virtualization experience starting Jan1968, online at home since Mar1970

Chipsandcheese article on the CDC6600

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Chipsandcheese article on the CDC6600
Newsgroups: comp.arch
Date: Mon, 22 Jul 2024 13:09:12 -1000
anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
From what I read, the Model 91 was considered a technical (and marketing) success, but commercially a failure (sold at a loss, and therefore quickly canceled). But apparently the market benefit was enough that they then built the 360/195 and 370/195. 15 91s were built and about 20 195s. The 195 was withdrawn in 1977, and AFAIK that was the end of IBM's supercomputing ambitions for a while. This may have had to do with the introduction of the Cray-1 in 1976 or the IBM 3033 in 1977. IBM eventually announced the optional vector facility for the 3090 in 1985. OoO processing vanished from S/360 successors with the 195 and only reappeared quite a while after it had appeared in Intel and RISC CPUs.

recent comp.arch posts
https://www.garlic.com/~lynn/2024d.html#0 time-sharing history, Privilege Levels Below User
https://www.garlic.com/~lynn/2024d.html#1 time-sharing history, Privilege Levels Below User
https://www.garlic.com/~lynn/2024d.html#2 time-sharing history, Privilege Levels Below User
https://www.garlic.com/~lynn/2024d.html#24 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#32 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#34 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#39 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#40 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#41 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#48 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#50 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#54 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#55 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#58 Architectural implications of locate mode I/O and channels
https://www.garlic.com/~lynn/2024d.html#61 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#83 Continuations
https://www.garlic.com/~lynn/2024d.html#100 Chipsandcheese article on the CDC6600

... also shortly after joining IBM, I was asked if I could help with a project to multi-thread the 370/195 ... also from the ACS end page:
https://people.computing.clemson.edu/~mark/acs_end.html
Sidebar: Multithreading

In summer 1968, Ed Sussenguth investigated making the ACS/360 into a multithreaded design by adding a second instruction counter and a second set of registers to the simulator. Instructions were tagged with an additional "red/blue" bit to designate the instruction stream and register set; and, as was expected, the utilization of the functional units increased since more independent instructions were available.

IBM patents and disclosures on multithreading include:

US Patent 3,728,692, J.W. Fennel, Jr., "Instruction selection in a two-program counter instruction unit," filed August 1971, and issued April 1973.

US Patent 3,771,138, J.O. Celtruda, et al., "Apparatus and method for serializing instructions from two independent instruction streams," filed August 1971, and issued November 1973. [Note that John Earle is one of the inventors listed on the '138.]

"Multiple instruction stream uniprocessor," IBM Technical Disclosure Bulletin, January 1976, 2pp. [for S/370]


... snip ...

370/195 had a 64-instruction pipeline and could do out-of-order execution ... but didn't have branch prediction or speculative execution ... so conditional branches drained the pipeline and most codes ran at half the 195's rated throughput. Simulating a multiprocessor with red/blue instruction streams ... could get two half-rate streams running the 195 at full speed (modulo MVT/MVS two-processor support only having 1.2-1.5 times the throughput of a single processor). The whole thing was shut down when it was decided to add virtual memory to all 370s ... which was deemed not practical for the 195.
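
As a rough back-of-envelope illustration of why the second (red/blue) instruction stream recovers the branch-drain losses, a minimal sketch in Python (the branch frequency and drain-cycle numbers are made-up assumptions, not 195 measurements):

  # made-up illustrative numbers: one instruction issues per cycle, and every
  # conditional branch drains the pipeline for 'drain' cycles before issue resumes
  def effective_rate(branch_every, drain):
      # fraction of rated (one-per-cycle) throughput for a single stream
      return branch_every / (branch_every + drain)

  single = effective_rate(branch_every=8, drain=8)   # -> 0.50 of rated
  # with two independent (red/blue) streams, one keeps issuing while the
  # other's pipeline refills, so the aggregate approaches the full rate
  dual = min(1.0, 2 * single)                        # -> 1.00 of rated

  print(f"one stream : {single:.2f} of rated throughput")
  print(f"two streams: {dual:.2f} of rated throughput")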

z196 (july2010) documents claim that half of the per-processor improvement in MIP rate (compared to the previous z10) is due to the introduction of things like out-of-order execution.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
smp, tightly-coupled, multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

some posts mentioning the multiprocessor/multi-thread 370/195, canceled when the decision was made to add virtual memory to all 370s
https://www.garlic.com/~lynn/2024d.html#66 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024c.html#52 backward architecture, The Design of Design
https://www.garlic.com/~lynn/2024c.html#20 IBM Millicode
https://www.garlic.com/~lynn/2024b.html#105 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024.html#24 Tomasulo at IBM
https://www.garlic.com/~lynn/2023e.html#100 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#20 IBM 360/195
https://www.garlic.com/~lynn/2023b.html#6 z/VM 50th - part 7
https://www.garlic.com/~lynn/2023b.html#0 IBM 370
https://www.garlic.com/~lynn/2022h.html#17 Arguments for a Sane Instruction Set Architecture--5 years later
https://www.garlic.com/~lynn/2022g.html#95 Iconic consoles of the IBM System/360 mainframes, 55 years old
https://www.garlic.com/~lynn/2022d.html#34 Retrotechtacular: The IBM System/360 Remembered
https://www.garlic.com/~lynn/2022d.html#22 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022b.html#51 IBM History
https://www.garlic.com/~lynn/2022.html#60 370/195
https://www.garlic.com/~lynn/2022.html#31 370/195
https://www.garlic.com/~lynn/2017.html#90 The ICL 2900
https://www.garlic.com/~lynn/2017.html#3 Is multiprocessing better then multithreading?
https://www.garlic.com/~lynn/2016h.html#45 Resurrected! Paul Allen's tech team brings 50-year-old supercomputer back from the dead
https://www.garlic.com/~lynn/2016h.html#7 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2015c.html#69 A New Performance Model ?
https://www.garlic.com/~lynn/2015c.html#26 OT: Digital? Cloud? Modern And Cost-Effective? Surprise! It's The Mainframe - Forbes
https://www.garlic.com/~lynn/2014m.html#105 IBM 360/85 vs. 370/165
https://www.garlic.com/~lynn/2013m.html#51 50,000 x86 operating system on single mainframe
https://www.garlic.com/~lynn/2005.html#5 [Lit.] Buffer overruns

--
virtualization experience starting Jan1968, online at home since Mar1970

Chipsandcheese article on the CDC6600

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Chipsandcheese article on the CDC6600
Newsgroups: comp.arch
Date: Mon, 22 Jul 2024 17:01:10 -1000
mitchalsup@aol.com (MitchAlsup1) writes:
The largest a team should be is 11 + leader--Jesus tried for 12 and failed.

re:
https://www.garlic.com/~lynn/2024d.html#100 Chipsandcheese article on the CDC6600
https://www.garlic.com/~lynn/2024d.html#101 Chipsandcheese article on the CDC6600

trivia: science center wanted to get a 360/50 to modify for virtual memory, but all the extra 360/50s were going to FAA ATC ... and so they had to settle for 360/40 ... they implemented virtual memory with associative array that held process-ID and virtual page number for each real page (compared to Atlas associative array, which just had virtual page number for each real page... effectively just single large virtual address space).
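
A minimal sketch of the lookup difference (illustrative Python only, not the CP/40 hardware or microcode; the table layout and names are assumptions): one associative entry per real page frame, with CP/40 entries carrying a process-ID plus virtual page number so multiple address spaces could be resident at once, while Atlas-style entries carry only the virtual page number:

  def cp40_translate(entries, pid, vpage):
      # entries: real_page -> (process_id, virtual_page) for resident pages
      for real_page, (p, v) in entries.items():
          if p == pid and v == vpage:
              return real_page
      return None                      # miss -> page fault

  def atlas_translate(entries, vpage):
      # entries: real_page -> virtual_page (single large address space)
      for real_page, v in entries.items():
          if v == vpage:
              return real_page
      return None

  # two processes both using virtual page 3, resident at the same time
  frames = {0: (1, 3), 1: (2, 3), 2: (1, 7)}
  print(cp40_translate(frames, pid=1, vpage=3))   # -> real page 0
  print(cp40_translate(frames, pid=2, vpage=3))   # -> real page 1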

the official IBM operating system for (standard virtual memory) 360/67 was TSS/360 which peaked around 1200 people at a time when the science center had 12 people (that included secretary) morphing CP/40 into CP/67.

Melinda's history website
https://www.leeandmelindavarian.com/Melinda#VMHist
description of CP/40 (for modified 360/40)
https://www.leeandmelindavarian.com/Melinda/JimMarch/CP40_The_Origin_of_VM370.pdf

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

other recent comp.arch posts
https://www.garlic.com/~lynn/2024d.html#0 time-sharing history, Privilege Levels Below User
https://www.garlic.com/~lynn/2024d.html#1 time-sharing history, Privilege Levels Below User
https://www.garlic.com/~lynn/2024d.html#2 time-sharing history, Privilege Levels Below User
https://www.garlic.com/~lynn/2024d.html#24 ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#32 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#34 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#39 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#40 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#41 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#48 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#50 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#54 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#55 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#58 Architectural implications of locate mode I/O and channels
https://www.garlic.com/~lynn/2024d.html#61 Architectural implications of locate mode I/O
https://www.garlic.com/~lynn/2024d.html#83 Continuations

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 360/40, 360/50, 360/65, 360/67, 360/75

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 360/40, 360/50, 360/65, 360/67, 360/75
Date: 23 Jul, 2024
Blog: Facebook
trivia: science center wanted to get a 360/50 to modify for virtual memory, but all the extra 360/50s were going to FAA ATC ... and so they had to settle for 360/40 ... they implemented virtual memory with associative array that held process-ID and virtual page number for each real page (compared to Atlas associative array, which just had virtual page number for each real page... effectively just single large virtual address space).

the official IBM operating system for (standard virtual memory) 360/67 was TSS/360 which peaked around 1200 people at a time when the science center had 12 people (that included secretary) morphing CP/40 into CP/67.

Melinda's history website
https://www.leeandmelindavarian.com/Melinda#VMHist
description of CP/40 (for modified 360/40)
https://www.leeandmelindavarian.com/Melinda/JimMarch/CP40_The_Origin_of_VM370.pdf

took a two credit hr intro to fortran/computers class; at the end of the semester I was hired to rewrite 1401 MPIO for the 360/30. The univ was getting a 360/67 for tss/360 to replace its 709/1401 and temporarily got a 360/30 (that had 1401 microcode emulation) to replace the 1401, pending arrival of the 360/67 (the univ shut down the datacenter on weekends, and I would have it dedicated, although 48hrs w/o sleep made Monday classes hard). Within a year of taking the intro class, the 360/67 showed up and I was hired fulltime with responsibility for OS/360 (tss/360 never came to production, so the machine ran as a 360/65 with os/360) ... and I continued to have my dedicated weekend time. Student fortran ran in under a second on the 709 (tape to tape), but initially took over a minute on the 360/65. I install HASP and it cuts the time in half. I then start revamping stage2 sysgen to place datasets and PDS members to optimize disk seek and multi-track searches, cutting another 2/3rds to 12.9secs; it never got better than the 709 until I install univ of waterloo WATFOR. My 1st SYSGEN was R9.5MFT, then I started redoing stage2 sysgen for R11MFT. MVT shows up with R12 but I didn't do an MVT gen until R15/16 (the R15/16 disk format shows up with the ability to specify the VTOC cyl ... aka place it other than cyl0 to reduce avg. arm seek).

CSC comes out to install CP67/CMS (3rd installation after CSC itself and MIT Lincoln Labs), which I mostly got to play with during my dedicated weekend time. The first few months I mostly spent rewriting CP67 pathlengths for running os/360 in a virtual machine; a test os/360 stream ran 322secs on the bare machine and initially 856secs virtually (CP67 CPU 534secs); I got CP67 CPU down to 113secs. CP67 had 1052 & 2741 support with automatic terminal type identification (controller SAD CCW to switch the port terminal type). The univ had some TTY terminals, so I added TTY support integrated with automatic terminal type identification. I then wanted a single dial-up number ("hunt group") for all terminals ... but IBM had taken a short-cut and hardwired port line speed ... which kicks off a univ. program to build a clone controller: building a channel interface board for an Interdata/3 programmed to emulate an IBM controller, with the addition of automatic line speed. It was later upgraded to an Interdata/4 for the channel interface and a cluster of Interdata/3s for the port interfaces. Four of us get written up as responsible for (some part of) the IBM clone controller business ... initially sold by Interdata and then by Perkin-Elmer
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division

an early glitch with the clone controller development ... was storing the leading bits of characters coming off the line starting in the high-order bit position ... however IBM 360 telecommunication controllers stored the leading bits starting in the low-order bit position, so (ASCII) character bytes arriving in computer memory were bit-reversed (and the IBM EBCDIC<->ASCII translate tables were built to handle the reversed-bit characters). For controller compatibility, we had to follow the IBM 360 telecommunication controller convention. The IBM terminals weren't actually EBCDIC either, but used tilt/rotate codes, which had to be translated to and from EBCDIC.
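
A minimal sketch of the bit-reversed-byte convention (illustrative Python; the helper is hypothetical, and the real handling was simply 256-byte translate tables built against the reversed values):

  def bit_reverse(byte):
      # reverse the 8 bits of a byte, e.g. 0x41 ('A') -> 0x82
      return int(f"{byte:08b}"[::-1], 2)

  ascii_A = ord("A")                  # 0x41 as sent by an ASCII terminal
  wire_A = bit_reverse(ascii_A)       # 0x82 as stored in 360 memory
  EBCDIC_A = 0xC1
  translate = {wire_A: EBCDIC_A}      # one entry of a 256-byte TR table
  print(hex(wire_A), hex(translate[wire_A]))   # 0x82 0xc1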

Along with R18 MVT, I add terminal support to HASP and implement an editor with CMS edit syntax (the HASP and CMS programming environments were totally different) for a CRJE facility.

Before I graduate, I'm hired fulltime into the Boeing CFO office to help with the formation of Boeing Computer Services (consolidating all dataprocessing into an independent business unit). I thought Renton was the largest datacenter in the world (a couple hundred million dollars), 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room (a former 727? assembly line?) ... Renton did have one 360/75 that was used for classified work (black rope around the 75 area; when classified work was running, guards at the perimeter and heavy black felt draped over the 75 lights and the exposed 1403 print areas). Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing Field for payroll (although they enlarge the machine room and install a 360/67 for me to play with when I wasn't doing other stuff). When I graduate, I join the science center instead of staying with the Boeing CFO.

Later, in the early 80s, I was introduced to John Boyd and would sponsor his briefings at IBM. One of his stories was about being very vocal that the electronics across the trail wouldn't work ... and possibly as punishment he was put in command of "spook base" (about the same time I'm at Boeing). Boyd's biography says that "spook base" was a $2.5B "windfall" for IBM (ten times Renton).

trivia: 360s were originally to be ASCII machines ... but the ASCII unit record gear wasn't ready ... so had to use old tab BCD gear (and EBCDIC) ... biggest computer goof ever:
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM

Science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
Boyd posts and web URLs
https://www.garlic.com/~lynn/subboyd.html

some recent posts mentioning Univ 709/1401, 360/30, 360/67, Boeing CFO, Renton datacenter, etc
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024d.html#22 Early Computer Use
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2023g.html#80 MVT, MVS, MVS/XA & Posix support
https://www.garlic.com/~lynn/2023f.html#83 360 CARD IPL
https://www.garlic.com/~lynn/2023e.html#64 Computing Career
https://www.garlic.com/~lynn/2023e.html#54 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#88 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#69 Fortran, IBM 1130
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#73 Dataprocessing 48hr shift
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#75 IBM Mainframe
https://www.garlic.com/~lynn/2023b.html#15 IBM User Group, SHARE
https://www.garlic.com/~lynn/2023.html#22 IBM Punch Cards
https://www.garlic.com/~lynn/2022h.html#99 IBM 360
https://www.garlic.com/~lynn/2022h.html#60 Fortran
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#31 Technology Flashback
https://www.garlic.com/~lynn/2022d.html#110 Window Display Ground Floor Datacenters
https://www.garlic.com/~lynn/2022d.html#95 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#57 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022d.html#10 Computer Server Market
https://www.garlic.com/~lynn/2022.html#12 Programming Skills

--
virtualization experience starting Jan1968, online at home since Mar1970

What happens to old MAC assignments?

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: What happens to old MAC assignments?
Newsgroups: alt.folklore.computers
Date: Wed, 24 Jul 2024 14:07:17 -1000
Lars Poulsen <lars@beagle-ears.com> writes:
If the two machines have the same IP address but different MAC addresses, the new machine cannot take over so long as the ARP entry survives. Old implementations would keep the ARP table entry as long as it was used. Newer implementations re-arp after half the entry's lifetime has expired. Been there, done that as the implementor of an embedded IP stack.

If the new machine has the same MAC address as the old one, old MAC level routes may survive in one or more switching hubs. Best cure for that is sending a broadcast from the new location. (Seen as an issue in a wireless network when a mobile end node moves between access points.)


circa 1990, when we were doing IBM's HA/CMP ... and doing IP-address take-over (as part of failure handling) ... while (BSD) Reno/Tahoe 4.3 TCP/IP stack implementations had ARP-cache time-out, they had special code that saved the previous MAC&IP address pair (to avoid calling the ARP-cache routine; that special save never timed out) ... in various client/server scenarios, some (Reno/Tahoe-based TCP/IP) clients only ever used the same server address, and therefore there was (almost) never a call to the ARP-cache routine. We had to do a special hack, pinging clients from a different IP-address (to force a call to the ARP-cache routine), when take-over was involved.
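
A toy sketch of that behavior (illustrative Python, not the actual BSD Reno/Tahoe code; the class and method names are made up): a normal ARP cache with timeouts plus a one-entry "last destination" shortcut that is consulted first and never times out, so a client that only ever sends to one server IP keeps using the failed machine's MAC until something (like answering a ping from a different address) forces it back through the ARP-cache routine:

  import time

  class ToyStack:
      def __init__(self, ttl=60):
          self.ttl = ttl
          self.cache = {}        # ip -> (mac, expiry), normal ARP cache
          self.last = None       # (ip, mac) shortcut, never expires

      def _arp_cache(self, ip, resolve):
          mac, expiry = self.cache.get(ip, (None, 0.0))
          if time.time() >= expiry:              # missing/expired: re-ARP
              mac = resolve(ip)
              self.cache[ip] = (mac, time.time() + self.ttl)
          self.last = (ip, mac)
          return mac

      def send(self, ip, resolve):
          if self.last and self.last[0] == ip:
              return self.last[1]                # shortcut: stale MAC lives on
          return self._arp_cache(ip, resolve)

      def answer_ping(self, src_ip, resolve):
          # replying to a ping from a *different* IP goes through the
          # ARP-cache routine, which also replaces the saved shortcut
          return self._arp_cache(src_ip, resolve)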

IBM High Availability Cluster Multiprocessing
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

posts mentioning HA/CMP, Reno/Tahoe and ARP-cache
https://www.garlic.com/~lynn/2024d.html#84 ATT/SUN and Open System Foundation
https://www.garlic.com/~lynn/2014.html#29 Hardware failures (was Re: Scary Sysprogs ...)
https://www.garlic.com/~lynn/2012l.html#16 X86 server
https://www.garlic.com/~lynn/2011p.html#42 z/OS's basis for TCP/IP
https://www.garlic.com/~lynn/2009p.html#40 Wireless security (somehow thread-drifted from Re: Getting Out Hard Drive in Real Old Computer)
https://www.garlic.com/~lynn/2009l.html#7 VTAM security issue
https://www.garlic.com/~lynn/2009b.html#26 A question about arp tables
https://www.garlic.com/~lynn/2004q.html#35 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2003c.html#21 Network separation using host w/multiple network interfaces
https://www.garlic.com/~lynn/2002.html#23 Buffer overflow
https://www.garlic.com/~lynn/2000b.html#45 OSA-Express Gigabit Ethernet card planning
https://www.garlic.com/~lynn/99.html#54 Fault Tolerance
https://www.garlic.com/~lynn/96.html#34 Mainframes & Unix
https://www.garlic.com/~lynn/aadsm23.htm#24 Reliable Connections Are Not

--
virtualization experience starting Jan1968, online at home since Mar1970

Biggest Computer Goof Ever

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Biggest Computer Goof Ever
Date: 24 Jul, 2024
Blog: Linkedin
trivia: 360s were originally to be ASCII machines ... but the ASCII unit record gear wasn't ready ... so had to use old tab BCD gear (and EBCDIC) ... biggest computer goof ever:
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM

Late 80s, a senior disk engineer got a talk scheduled at the communication group's internal, world-wide, annual conference, supposedly on the 3174, but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing data fleeing datacenters to more distributed-computing-friendly platforms, with a drop in disk sales. The disk division had come up with a number of solutions, but they were constantly vetoed by the communication group (with their corporate strategic ownership of everything that crossed datacenter walls, fiercely fighting off client/server and distributed computing, trying to preserve their dumb-terminal paradigm). A disk division executive's partial countermeasure was investing in distributed computing startups that would use IBM disks, and he periodically asked us to stop by his investments to see if there was any help we could provide.

It wasn't just disks; a couple of years later, IBM had one of the largest losses in the history of US companies and was being reorganized into the 13 "baby blues" in preparation for breaking up the company:
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

we had already left IBM but got a call from the bowels of Armonk (corporate hdqtrs) asking if we could help with the breakup of the company. Before we get started, the board brings in the former president of AMEX, who (somewhat) reverses the breakup (although it wasn't long before the disk division is gone).

posts mentioning communication group battling client/server and distributed computing ... trying to preserve their dumb terminal paradigm
https://www.garlic.com/~lynn/subnetwork.html#terminal
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

some recent posts mentioning biggest computer goof ever
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#53 IBM 3705 & 3725
https://www.garlic.com/~lynn/2024c.html#14 Bemer, ASCII, Brooks and Mythical Man Month
https://www.garlic.com/~lynn/2024b.html#113 EBCDIC
https://www.garlic.com/~lynn/2024.html#102 EBCDIC Card Punch Format
https://www.garlic.com/~lynn/2023e.html#82 Saving mainframe (EBCDIC) files
https://www.garlic.com/~lynn/2023e.html#24 EBCDIC "Commputer Goof"
https://www.garlic.com/~lynn/2022d.html#24 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022c.html#116 What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022c.html#56 ASCI White
https://www.garlic.com/~lynn/2022c.html#51 Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022b.html#91 Computer BUNCH
https://www.garlic.com/~lynn/2022b.html#58 Interdata Computers
https://www.garlic.com/~lynn/2022b.html#13 360 Performance
https://www.garlic.com/~lynn/2021e.html#44 Blank 80-column punch cards up for grabs
https://www.garlic.com/~lynn/2021d.html#92 EBCDIC Trivia

--
virtualization experience starting Jan1968, online at home since Mar1970

Private Equity

From: Lynn Wheeler <lynn@garlic.com>
Subject: Private Equity
Date: 25 Jul, 2024
Blog: Linkedin
... some recent Private Equity

Private Equity Firms Increasingly Held Responsible for Healthcare Fraud
https://waterskraus.com/private-equity-firms-increasingly-held-responsible-for-healthcare-fraud/

Pitchbook Report Attempting to Defend Private Equity Role in Healthcare Suffers from Fundamental, Fatal Analytical Flaw
https://www.nakedcapitalism.com/2024/07/pitchbook-report-attempting-to-defend-private-equity-role-in-healthcare-suffers-from-fundamental-fatal-analytical-flaw

Private Equity Puts Debt Everywhere
https://archive.ph/ivzoQ#selection-1389.0-1394.0

Private Equity Gets Creative to Buy Time for More Gains. Clients Say Pay Me Now https://archive.ph/9x4Be

Private equity is devouring the economy as boomer entrepreneurs exit--but a new approach to employee ownership can change that
https://archive.ph/rG3f7

private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

some specific posts mentioning private equity and health care market, hospitals, doctors practices, etc.
https://www.garlic.com/~lynn/2024.html#99 A Look at Private Equity's Medicare Advantage Grifting
https://www.garlic.com/~lynn/2024.html#45 Hospitals owned by private equity are harming patients
https://www.garlic.com/~lynn/2023f.html#1 How U.S. Hospitals Undercut Public Health
https://www.garlic.com/~lynn/2023.html#23 Health Care in Crisis: Warning! US Capitalism is Lethal
https://www.garlic.com/~lynn/2023.html#8 Ponzi Hospitals and Counterfeit Capitalism
https://www.garlic.com/~lynn/2022h.html#119 Patients for Profit: How Private Equity Hijacked Health Care
https://www.garlic.com/~lynn/2022h.html#76 Parasitic Private Equity is Consuming U.S. Health Care from the Inside Out
https://www.garlic.com/~lynn/2022g.html#25 Another Private Equity-Style Hospital Raid Kills a Busy Urban Hospital
https://www.garlic.com/~lynn/2022f.html#100 When Private Equity Takes Over a Nursing Home
https://www.garlic.com/~lynn/2022d.html#41 Your Money and Your Life: Private Equity Blasts Ethical Boundaries of American Medicine
https://www.garlic.com/~lynn/2022c.html#103 The Private Equity Giant KKR Bought Hundreds Of Homes For People With Disabilities
https://www.garlic.com/~lynn/2021g.html#64 Private Equity Now Buying Up Primary Care Practices
https://www.garlic.com/~lynn/2021g.html#40 Why do people hate universal health care? It turns out -- they don't
https://www.garlic.com/~lynn/2021f.html#7 The Rise of Private Equity
https://www.garlic.com/~lynn/2019e.html#21 Private Equity and Surprise Medical Billing
https://www.garlic.com/~lynn/2019d.html#43 Private Equity: The Perps Behind Destructive Hospital Surprise Billing
https://www.garlic.com/~lynn/2019c.html#83 Americans Die Younger Despite Spending the Most on Health Care

--
virtualization experience starting Jan1968, online at home since Mar1970

Biggest Computer Goof Ever

From: Lynn Wheeler <lynn@garlic.com>
Subject: Biggest Computer Goof Ever
Date: 26 Jul, 2024
Blog: Linkedin
re:
https://www.garlic.com/~lynn/2024d.html#105 Biggest Computer Goof Ever

Learson named in the "biggest computer goof ever"
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM
... then he is CEO and tries (and fails) to block the bureaucrats, careerists and MBAs from destroying the Watson culture and legacy ... then 20yrs later, IBM has one of the largest losses in the history of US companies
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

post about as undergraduate in 60s, doing ASCII terminal support and then clone telecommunication controller using interdata
https://www.garlic.com/~lynn/2024d.html#99 Interdata Clone IBM Telecommunication Controller

then the univ. library gets an ONR grant to do an online catalog, some of the money goes for a 2321 datacell ... the library was also selected as a betatest site for the original CICS product ... and CICS support was added to my tasks. One of the 1st problems was CICS wouldn't come up ... it turns out that CICS had some hard-coded BDAM dataset options that weren't documented, and the library had built their datasets with a different set of options. Yelavich URLs have gone 404, but live on at the wayback machine
https://web.archive.org/web/20050409124902/http://www.yelavich.com/cicshist.htm
https://web.archive.org/web/20071124013919/http://www.yelavich.com/history/toc.htm

posts mentioning doing telecommunication clone controller
https://www.garlic.com/~lynn/submain.html#360pcm
posts mentioning CICS betatest, BDAM, univ library
https://www.garlic.com/~lynn/submain.html#cics

--
virtualization experience starting Jan1968, online at home since Mar1970

Time to retire the phrase 'Military Industrial Complex'

From: Lynn Wheeler <lynn@garlic.com>
Subject: Time to retire the phrase 'Military Industrial Complex'
Date: 26 Jul, 2024
Blog: Facebook
Time to retire the phrase 'Military Industrial Complex'
https://responsiblestatecraft.org/military-industrial-complex-2668809022/

part of Eisenhower's warning about the MIC ... the MIC was claiming a huge "bomber gap" with the Soviets, justifying something like a 30% increase in the DOD budget ... one of the things to remember about the CIA U2 flights is that they showed the claims weren't true.

recent post mentioning "team b"
https://www.garlic.com/~lynn/2024d.html#98 Why Bush Invaded Iraq

Team B "posts"
https://www.garlic.com/~lynn/submisc.html#team.b
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

some posts mentioning Eisenhower's warning
https://www.garlic.com/~lynn/2019c.html#18 The Making of the Military-Industrial Complex
https://www.garlic.com/~lynn/2018e.html#78 meanwhile in eastern Asia^WEurope, was tablets and desktops was Has Microsoft
https://www.garlic.com/~lynn/2016e.html#122 U.S. Defense Contractors Tell Investors Russian Threat Is Great for Business
https://www.garlic.com/~lynn/2016d.html#99 Trust in Government Is Collapsing Around the World
https://www.garlic.com/~lynn/2016c.html#80 Qbasic
https://www.garlic.com/~lynn/2015.html#13 LEO
https://www.garlic.com/~lynn/2014h.html#22 $40 billion missile defense system proves unreliable
https://www.garlic.com/~lynn/2014b.html#54 Royal Pardon For Turing
https://www.garlic.com/~lynn/2014b.html#52 US Army hopes to replace 25% of soldiers with robots by 2040
https://www.garlic.com/~lynn/2013h.html#41 Is newer technology always better? It almost is. Exceptions?
https://www.garlic.com/~lynn/2013g.html#54 What Makes collecting sales taxes Bizarre?

--
virtualization experience starting Jan1968, online at home since Mar1970

Time to retire the phrase 'Military Industrial Complex'

From: Lynn Wheeler <lynn@garlic.com>
Subject: Time to retire the phrase 'Military Industrial Complex'
Date: 26 Jul, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024d.html#108 Time to retire the phrase 'Military Industrial Complex'

when Obama was first freezing spending to help reduce the enormous deficit he inherited, the MIC heavily lobbied to significantly cut DOD veteran/VA spending ... so it could go into their pockets.

we had a neighbor who was a mental health professional and said that the VA was cutting doctors and instead planning on putting vulnerable veterans on a drug maintenance program. The VA was already not being funded for the anticipated huge influx of soldiers coming home from the two wars (with the MIC wanting to maintain/increase their funding level out of veteran benefits).

some posts referencing VA funding and the VA not being prepared to handle the influx of physical and mental disabilities from the two wars
https://www.garlic.com/~lynn/2018c.html#5 The war isn't over. After military service, veterans still fight to endure
https://www.garlic.com/~lynn/2017f.html#54 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017.html#16 House GOP appallingly votes to conceal cost of Obamacare repeal to taxpayers
https://www.garlic.com/~lynn/2016f.html#20 Why a Single-Payer Health Care System is Inevitable
https://www.garlic.com/~lynn/2016b.html#110 The Koch-Fueled Plot to Destroy the VA
https://www.garlic.com/~lynn/2015d.html#76 Greedy Banks Nailed With $5 BILLION+ Fine For Fraud And Corruption

military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3705 & 3725

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3705 & 3725
Date: 27 Jul, 2024
Blog: Facebook
re:
https://www.garlic.com/~lynn/2024c.html#53 IBM 3705 & 3725
https://www.garlic.com/~lynn/2024c.html#54 IBM 3705 & 3725
https://www.garlic.com/~lynn/2024c.html#70 IBM 3705 & 3725
https://www.garlic.com/~lynn/2024c.html#71 IBM 3705 & 3725
https://www.garlic.com/~lynn/2024c.html#72 IBM 3705 & 3725

Fall 1986, I was asked to give a talk on the S1 NCP/VTAM at the SNA ARB (architecture review board) in Raleigh. The technical people really loved it (much better than the 3725 processor and the real NCP base code). However as I was leaving, the executive responsible for SNA caught me in the hall and wanted to know who was responsible for me presenting to the ARB.

Later, what the communication group did to tank the S1 NCP/VTAM effort could only be described as truth being stranger than fiction.

trivia: as an undergraduate in the 60s I worked on a clone IBM telecommunication controller (using an Interdata computer), archived posts
https://www.garlic.com/~lynn/submain.html#360pcm

old Series/1 NCP/VTAM thread/posts
https://www.garlic.com/~lynn/99.html#66 System/1 ?
https://www.garlic.com/~lynn/99.html#67 System/1 ?
https://www.garlic.com/~lynn/99.html#69 System/1 ?
https://www.garlic.com/~lynn/99.html#70 System/1 as NCP (was: System/1 ?)

some recent posts mentioning Series/1 NCP/VTAM effort
https://www.garlic.com/~lynn/2024b.html#62 Vintage Series/1
https://www.garlic.com/~lynn/2024.html#83 SNA/VTAM
https://www.garlic.com/~lynn/2024.html#34 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#44 IBM Vintage Series/1
https://www.garlic.com/~lynn/2023e.html#89 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023c.html#62 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#60 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023c.html#57 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023b.html#91 360 Announce Stories
https://www.garlic.com/~lynn/2023b.html#62 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2023b.html#4 IBM 370
https://www.garlic.com/~lynn/2023b.html#3 IBM 370
https://www.garlic.com/~lynn/2022h.html#98 IBM 360
https://www.garlic.com/~lynn/2022h.html#50 SystemView
https://www.garlic.com/~lynn/2022e.html#32 IBM 37x5 Boxes
https://www.garlic.com/~lynn/2022c.html#79 Peer-Coupled Shared Data
https://www.garlic.com/~lynn/2022b.html#102 370/158 Integrated Channel
https://www.garlic.com/~lynn/2022.html#120 Series/1 VTAM/NCP
https://www.garlic.com/~lynn/2021k.html#115 Peer-Coupled Shared Data Architecture
https://www.garlic.com/~lynn/2021k.html#87 IBM and Internet Old Farts
https://www.garlic.com/~lynn/2021j.html#14 IBM SNA ARB
https://www.garlic.com/~lynn/2021i.html#83 IBM Downturn
https://www.garlic.com/~lynn/2021f.html#2 IBM Series/1
https://www.garlic.com/~lynn/2021c.html#91 IBM SNA/VTAM (& HSDT)

--
virtualization experience starting Jan1968, online at home since Mar1970

GNOME bans Manjaro Core Team Member for uttering "Lunduke"

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: GNOME bans Manjaro Core Team Member for uttering "Lunduke"
Newsgroups: comp.os.linux.advocacy, comp.os.linux.misc,
 alt.folklore.computers
Date: Sun, 28 Jul 2024 08:15:41 -1000
Chris Ahlstrom <OFeem1987@teleworm.us> writes:
Ahem A Rivet's Shot wrote this copyrighted missive and expects royalties: Ahhh, good ol' RUNOFF. I used it in the 70's when it's input was all UPPER-CASE text but mixed-case documents could be generated.

These days I have used LaTeX and good ol' vi to generate PDF documents of two or three hundred pages containing graphics. No way in hell I would do that with Microsoft Word or LibreOffice. The former is especially painful. People have used it to write books; the thought makes me shudder.

Of course, you need some kind of viewer that can show images.


MIT CTSS/7094 had (upper/lower case) 2741 terminals.
https://en.wikipedia.org/wiki/TYPSET_and_RUNOFF#CTSS
The original RUNOFF type-setting program for CTSS was written by Jerome H. Saltzer circa 1964. Bob Morris and Doug McIlroy translated that from MAD to BCPL.[3] Morris and McIlroy then moved the BCPL version to Multics when the IBM 7094 on which CTSS ran was being shut down.

... snip ...

Some of the CTSS people went to the 5th flr to do Multics. Others went to the IBM science center on the 4th flr and did virtual machines. They originally wanted a 360/50 to do hardware mods to add virtual memory, but all the extra 360/50s were going to the FAA ATC program, and so they had to settle for a 360/40 ... doing CP40/CMS. Then, when the 360/67 (standard with virtual memory) became available, CP40/CMS morphs into CP67/CMS.

... also RUNOFF ported to CP67/CMS
https://en.wikipedia.org/wiki/TYPSET_and_RUNOFF#Other_versions_and_implementations
The origin of IBM's SCRIPT software began in 1968 when IBM contracted Stuart Madnick of MIT to write a simple document preparation tool[7] for CP/67,[8] which he modelled on MIT's CTSS RUNOFF.[9]

... snip ...

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml

Then in 1969, GML was invented at the science center and GML tag processing was added to SCRIPT. A decade later GML morphs into ISO standard SGML and after another decade, morphs into HTML at CERN. First webserver in the US was done on (CERN sister institution) Stanford SLAC on their VM370 (descendant of CP67)
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994

the primary person, before inventing GML (in 1969), had been hired to push Cambridge's wide-area network (which morphs into the internal corporate network, larger than the arpanet/internet from just about the beginning until sometime mid/late 80s)
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the project, something I was allowed to do part-time because of my knowledge of the user requirements. My real job was to encourage the staffs of the various scientific centers to make use of the CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

technology also used for the corporate sponsored univ BITNET
https://en.wikipedia.org/wiki/BITNET
in 89, merges with CSNET
https://en.wikipedia.org/wiki/CSNET
to form CREN:
https://en.wikipedia.org/wiki/Corporation_for_Research_and_Educational_Networking

internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

person responsible for science center wide-area network (following also references the author of original RUNOFF)
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to DARPA, where Hendricks described his innovations to the principal scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75, Cerf and Hendricks were the only two delegates from the United States, to attend a workshop on Data Communications at the International Institute for Applied Systems Analysis, 2361 Laxenburg Austria where again, Hendricks spoke publicly about his innovative design which paved the way to the Internet as we know it today.

... snip ...

SJMerc article about Edson (he passed aug2020) and "IBM'S MISSED OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

trivia: I took a two credit hr intro to fortran/computers class and at the end of the semester was hired to rewrite 1401 MPIO for the 360/30. The univ was getting a 360/67 for tss/360 to replace its 709/1401 and temporarily got a 360/30 (that had 1401 microcode emulation) to replace the 1401, pending arrival of the 360/67 (the univ shut down the datacenter on weekends, and I would have it dedicated, although 48hrs w/o sleep made Monday classes hard). Within a year of taking the intro class, the 360/67 showed up and I was hired fulltime with responsibility for OS/360 (tss/360 never came to production, so the machine ran as a 360/65 with os/360) ... and I continued to have my dedicated weekend time. Student fortran ran in under a second on the 709 (tape to tape), but initially took over a minute on the 360/65. I install HASP and it cuts the time in half. I then start revamping stage2 sysgen to place datasets and PDS members to optimize disk seek and multi-track searches, cutting another 2/3rds to 12.9secs; it never got better than the 709 until I install univ of waterloo WATFOR. My 1st SYSGEN was R9.5MFT, then I started redoing stage2 sysgen for R11MFT. MVT shows up with R12 but I didn't do an MVT gen until R15/16 (the R15/16 disk format shows up with the ability to specify the VTOC cyl ... aka place it other than cyl0 to reduce avg. arm seek).

CSC comes out to install CP67/CMS (3rd installation after CSC itself and MIT Lincoln Labs), which I mostly got to play with during my dedicated weekend time. The first few months I mostly spent rewriting CP67 pathlengths for running os/360 in a virtual machine; a test os/360 stream ran 322secs on the bare machine and initially 856secs virtually (CP67 CPU 534secs); I got CP67 CPU down to 113secs. CP67 had 1052 & 2741 support with automatic terminal type identification (controller SAD CCW to switch the port scanner terminal type). The univ had some TTY terminals, so I added TTY support integrated with automatic terminal type identification.

I then wanted a single dial-up number ("hunt group") for all terminals ... but IBM had taken a short-cut and hardwired port line speed ... which kicks off a univ. program to build a clone controller: building a channel interface board for an Interdata/3 programmed to emulate an IBM controller, with the addition of automatic line speed. It was later upgraded to an Interdata/4 for the channel interface and a cluster of Interdata/3s for the port interfaces. Four of us get written up as responsible for (some part of) the IBM clone controller business ... initially sold by Interdata and then by Perkin-Elmer
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division

360 clone telecommunication controller
https://www.garlic.com/~lynn/submain.html#360pcm

--
virtualization experience starting Jan1968, online at home since Mar1970

43 years ago, Microsoft bought 86-DOS and started its journey to dominate the PC market

From: Lynn Wheeler <lynn@garlic.com>
Subject: 43 years ago, Microsoft bought 86-DOS and started its journey to dominate the PC market
Date: 29 Jul, 2024
Blog: Facebook
43 years ago, Microsoft bought 86-DOS and started its journey to dominate the PC market
https://www.xda-developers.com/on-this-day-43-years-ago-microsoft-bought-86-dos/
The IBM PC needed an operating system, and that's where Microsoft comes in. IBM contracted Microsoft to develop an operating system that would run on the IBM PC, and Microsoft in turn acquired a non-exclusive license from Seattle Computer Products (SCP) to port 86-DOS to the IBM PC. 86-DOS was a great choice because it looked similar to the existing CP/M operating system and apps could easily be ported from one to the other, but it would run on the Intel 8086 processor IBM was going with. This happened at the tail end of 1980, and it was just two weeks before the IBM PC would launch that Microsoft acquired full rights to the 86-DOS operating system, renaming it to MS-DOS, which it licensed to IBM to be sold under the name PC DOS.

... snip ...

before msdos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was CP/M
https://en.wikipedia.org/wiki/CP/M
before developing CP/M, Kildall worked on IBM CP/67-CMS at NPG
https://en.wikipedia.org/wiki/Naval_Postgraduate_School

Opel's obit ...
https://www.pcworld.com/article/243311/former_ibm_ceo_john_opel_dies.html
According to the New York Times, it was Opel who met with Bill Gates, CEO of the then-small software firm Microsoft, to discuss the possibility of using Microsoft PC-DOS OS for IBM's about-to-be-released PC. Opel set up the meeting at the request of Gates' mother, Mary Maxwell Gates. The two had both served on the National United Way's executive committee.

... snip ...

other trivia: Boca claimed that they weren't doing any software for ACORN (code name for the IBM/PC), and so a small IBM group in Silicon Valley formed to do ACORN software (many had been involved with CP/67-CMS and/or its follow-on VM/370-CMS) ... and every few weeks there was contact with Boca confirming that the decision hadn't changed. Then at some point Boca changed its mind, and the silicon valley group was told that if they wanted to do ACORN software, they would have to move to Boca (only one person accepted the offer, didn't last long, and returned to silicon valley). Then there was the joke that Boca didn't want any internal company competition and that it was better to deal with an external organization via contract than to deal with internal IBM politics.

science center posts (originated CP/67-CMS)
https://www.garlic.com/~lynn/subtopic.html#545tech

some posts mentioning msdos and Opel
https://www.garlic.com/~lynn/2024c.html#111 Anyone here (on news.eternal-september.org)?
https://www.garlic.com/~lynn/2024b.html#102 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#25 CTSS/7094, Multics, Unix, CP/67
https://www.garlic.com/~lynn/2024.html#4 IBM/PC History
https://www.garlic.com/~lynn/2023g.html#75 The Rise and Fall of the 'IBM Way'. What the tech pioneer can, and can't, teach us
https://www.garlic.com/~lynn/2023g.html#35 Vintage TSS/360
https://www.garlic.com/~lynn/2023g.html#27 Another IBM Downturn
https://www.garlic.com/~lynn/2023f.html#100 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023e.html#26 Some IBM/PC History
https://www.garlic.com/~lynn/2023c.html#6 IBM Downfall
https://www.garlic.com/~lynn/2023.html#99 Online Computer Conferencing
https://www.garlic.com/~lynn/2023.html#30 IBM Change
https://www.garlic.com/~lynn/2022f.html#107 IBM Downfall
https://www.garlic.com/~lynn/2022f.html#72 IBM/PC
https://www.garlic.com/~lynn/2022f.html#17 What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022f.html#7 Vintage Computing
https://www.garlic.com/~lynn/2022e.html#44 IBM Chairman John Opel
https://www.garlic.com/~lynn/2022d.html#90 Short-term profits and long-term consequences -- did Jack Welch break capitalism?
https://www.garlic.com/~lynn/2022d.html#44 CMS Personal Computing Precursor

--
virtualization experience starting Jan1968, online at home since Mar1970

... some 3090 and a little 3081

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: ... some 3090 and a little 3081
Date: 29 Jul, 2024
Blog: Facebook
... some 3090 and a little 3081

first part of the 70s, IBM had the Future System effort, completely different from 370 and going to completely replace it (internal politics during FS were killing off 370 products; the lack of new 370s during FS is credited with giving clone 370 makers, including Amdahl, their market foothold) ... when FS implodes there was a mad rush to get new stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel.
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

370/XA was referred to as "811" for the architecture/design documents' Nov1978 publication date; much of it was to address MVS shortcomings (aka the head of POK had shortly before managed to convince corporate to kill the VM370 product, shut down the development group and have all the people transferred to POK for MVS/XA; Endicott did eventually manage to save the VM370 product mission ... for the "mid-range").

trivia: when I joined IBM, one of my hobbies was enhanced production operating systems for internal datacenters (including the world-wide sales&marketing support HONE systems). In the original morph of CP67->VM370, lots of stuff was simplified or dropped (including multiprocessor support). In 1974, I start moving a bunch of stuff to VM370R2, including kernel reorg for multiprocessor support, but not the actual multiprocessor support itself.

In 1975, I move my CSC/VM system to VM370R3 and add multiprocessor support, originally for the US consolidated sales&marketing support HONE datacenters up in Palo Alto (the consolidated US systems had been enhanced into a single-system-image, loosely-coupled, shared-DASD operation with load-balancing and fall-over; one of the largest such complexes in the world). The multiprocessor support allowed them to add a 2nd processor to each system (making it the largest in the world; airlines' TPF had similar shared-DASD complexes, but TPF didn't get SMP support for another decade). I had done some hacks in order to get a two-processor system twice the throughput of a single processor (at the time, IBM documentation was that two-processor MVS had 1.2-1.5 times the throughput of a single processor).

With the implosion of FS (and the demise of the VM370 development group) ... I got roped into helping with a 16-processor 370 SMP and we con'ed the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips). Everybody thought it was great until somebody tells the head of POK that it could be decades before the POK favorite-son operating system (MVS) would have (effective) 16-processor support (POK doesn't ship a 16-processor system until after the turn of the century), and the head of POK invites some of us to never visit POK again (and tells the 3033 processor engineers: heads down on 3033 and no distractions). Some POK reps were also out bullying internal datacenters (including HONE) that they had to convert from VM370 to MVS. Once 3033 was out the door, the 3033 engineers start on trout/3090.

The initial two-processor 3081D had a lower (aggregate) MIP rate than the latest Amdahl single processor, and IBM fairly quickly responds with the 3081K, twice the processor cache size for approx the same (aggregate) MIP rate as the Amdahl single processor (but much lower throughput, with MVS duplex only 1.2-1.5 times a single processor).

some trout/3090 (and 3081) refs (including 3081 VMTOOL/SIE performance, never intended for production)
https://www.garlic.com/~lynn/2011b.html#email810210
https://www.garlic.com/~lynn/2019c.html#email810423
https://www.garlic.com/~lynn/2006j.html#email810630
https://www.garlic.com/~lynn/2003j.html#email831118

After the FS implosion, I had also been asked to help Endicott with ECPS microcode for virgil/tully (i.e. 138/148) ... they had 6kbytes avail for ECPS microcode and 370 instructions would translate to approx the same number of bytes ... I would start with analysis to find the 6kbytes of highest-executed vm/370 kernel instructions ... initial analysis in this old archived post
https://www.garlic.com/~lynn/94.html#21
the top 6k bytes of vm370 kernel instructions accounted for 79.55% of kernel execution and, moved to ECPS microcode, ran ten times faster.
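
A minimal sketch of that selection analysis (illustrative Python; the path names, sizes, and percentages are made-up sample numbers, not the archived measurements): rank kernel paths by fraction of kernel CPU time, take the highest-use paths until the 6KB microcode budget is consumed, then estimate the effect of the ~10x microcode speedup:

  paths = [
      # (kernel path, bytes of 370 code, fraction of kernel CPU time)
      ("dispatch",          1200, 0.22),
      ("free/fret storage",  900, 0.18),
      ("virtual page I/O",  1500, 0.15),
      ("CCW translation",   1600, 0.14),
      ("privop simulation",  800, 0.11),
      ("scheduler",         1400, 0.08),
  ]

  budget, used, covered = 6 * 1024, 0, 0.0
  for name, size, frac in sorted(paths, key=lambda p: p[2], reverse=True):
      if used + size > budget:
          break
      used += size
      covered += frac
      print(f"move {name:18s} {size:5d} bytes, cumulative {covered:.0%}")

  # moved code runs ~10x faster in microcode, the rest is unchanged
  print(f"estimated kernel CPU: {covered/10 + (1 - covered):.0%} of original")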

While POK managed to get the VM370 product killed, they did do the virtual machine VMTOOL for internal MVS/XA development. In the early 80s, I got permission to give presentations at user group meetings, including the monthly BAYBUNCH meetings (hosted by Stanford SLAC), on how (138/148) ECPS was implemented. After BAYBUNCH meetings we would usually adjourn to a local watering hole (most often "O" or "Goose"). The Amdahl people would grill me for additional ECPS information. They said that they were using MACROCODE (a 370-like instruction set running in microcode mode, originally developed to quickly respond to IBM's series of trivial 3033 microcode changes for MVS) to (quickly) implement HYPERVISOR (multiple domain facility, microcode implementing a virtual-machine VM370 subset) able to run MVS and MVS/XA concurrently. POK was then finding that customers weren't converting to MVS/XA like IBM had planned ... similar to the slow uptake of the original MVS:
http://www.mxg.com/thebuttonman/boney.asp

Amdahl was having better success (able to run MVS and MVS/XA concurrently). Trying to respond, POK releases VMTOOL as VM/MA and VM/SF (but they had significant performance shortcomings). Then POK was requesting a couple hundred people to upgrade VMTOOL to the VM/370 feature/function/performance level (for VM/XA). Endicott's counter was that a single IBM sysprog in Rochester had implemented full 370/XA support in VM/370 ... but POK won.

IBM wasn't able to respond (with LPAR&PR/SM) to Amdahl HYPERVISOR until nearly the end of the decade (Feb1988)
https://en.wikipedia.org/wiki/IBM_3090#Processor_Resource/Systems_Manager_(PR/SM)
notice in the above URLs ... the 3081 was using some warmed-over FS technology
https://en.wikipedia.org/wiki/IBM_3090#Cooling
i.e. originally accounting for the need for 3081 TCMs (& liquid cooling) to pack the huge amount of circuitry into a smaller volume
http://www.jfsowa.com/computer/memo125.htm
The 370 emulator minus the FS microcode was eventually sold in 1980 as as the IBM 3081. The ratio of the amount of circuitry in the 3081 to its performance was significantly worse than other IBM systems of the time; its price/performance ratio wasn't quite so bad because IBM had to cut the price to be competitive. The major competition at the time was from Amdahl Systems -- a company founded by Gene Amdahl, who left IBM shortly before the FS project began, when his plans for the Advanced Computer System (ACS) were killed. The Amdahl machine was indeed superior to the 3081 in price/performance and spectaculary superior in terms of performance compared to the amount of circuitry.]

... snip ...

Amdahl had won the battle to make ACS 360-compatible. Folklore is that executives were then afraid it would advance the state of the art too fast and IBM would lose control of the market (the following lists some features that show up more than two decades later with ES/9000):
https://people.computing.clemson.edu/~mark/acs_end.html

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
SMP, tightly-coupled, multiprocessors posts
https://www.garlic.com/~lynn/subtopic.html#smp
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

--
virtualization experience starting Jan1968, online at home since Mar1970

