List of Archived Posts

2024 Newsgroup Postings (07/29 - )

2314 Disks
TYMSHARE Dialup
DASD CKD
TYMSHARE Dialup
Private Equity
For Big Companies, Felony Convictions Are a Mere Footnote
2314 Disks
For Big Companies, Felony Convictions Are a Mere Footnote
Ampere Arm Server CPUs To Get 512 Cores, AI Accelerator
Saudi Arabia and 9/11
For Big Companies, Felony Convictions Are a Mere Footnote
Private Equity Giants Invest More Than $200M in Federal Races to Protect Their Lucrative Tax Loophole
360 1052-7 Operator's Console
360 1052-7 Operator's Console
50 years ago, CP/M started the microcomputer revolution
IBM Downfall and Make-over
50 years ago, CP/M started the microcomputer revolution
IBM Downfall and Make-over
IBM Downfall and Make-over
HONE, APL, IBM 5100
TYMSHARE, ADVENTURE/games
360/50 and CP-40
Disk Capacity and Channel Performance
After private equity takes over hospitals, they are less able to care for patients
Public Facebook Mainframe Group
VMNETMAP
VMNETMAP
VMNETMAP
VMNETMAP
VMNETMAP
Disk Capacity and Channel Performance
VMNETMAP
The Irrationality of Markets
IBM 138/148
VMNETMAP
Disk Capacity and Channel Performance
Implicit Versus Explicit "Run" Command
Gene Amdahl
Gene Amdahl
The Forgotten History of the Financial Crisis. What the World Should Have Learned in 2008
Instruction Tracing
Netscape
The Forgotten History of the Financial Crisis. What the World Should Have Learned in 2008
Netscape
Netscape
Netscape
Netscape
Netscape
PROFS
OSI and XTP

2314 Disks

From: Lynn Wheeler <lynn@garlic.com>
Subject: 2314 Disks
Date: 29 Jul, 2024
Blog: Facebook

IBM Wilshire Showroom
https://www.youtube.com/watch?v=i-jglBy9LOI

when I graduated and joined the science center, it had a 360/67 with
increasing banks of 2314s, quickly growing to five 8+1 banks and a
5-drive bank (for 45 drives) ... the CSC FE then painted each bank's
panel door a different color ... to help operators map a disk mount
request address to a 2314 bank.

one of my hobbies after joining IBM was enhanced production operating
systems for internal datacenters, and the online branch office
sales&marketing support HONE systems were a long-time customer (1st
cp67/cms, then vm370/cms) ... I most frequently stopped by the HONE 1133
westchester and wilshire blvd (3424?) datacenters ... before all US HONE
datacenters were consolidated in Palo Alto (trivia: when facebook 1st
moves into silicon valley, it is into a new bldg built next to the
former US consolidated HONE datacenter). The first US consolidated
operation was single-system-image, loosely-coupled, shared DASD with
load-balancing and fail-over across the complex (one of the largest in
the world, only some airline TPF operations were similar), then a 2nd
processor was added to each system (making it the largest; TPF didn't
get SMP multiprocessor support for another decade).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE system posts
https://www.garlic.com/~lynn/subtopic.html#hone
CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
SMP, loosely-coupled, multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

Posts mentioning CSC & 45 2314 drives
https://www.garlic.com/~lynn/2021c.html#72 I/O processors, What could cause a comeback for big-endianism very slowly?
https://www.garlic.com/~lynn/2019.html#51 3090/3880 trivia
https://www.garlic.com/~lynn/2013d.html#50 Arthur C. Clarke Predicts the Internet, 1974
https://www.garlic.com/~lynn/2012n.html#60 The IBM mainframe has been the backbone of most of the world's largest IT organizations for more than 48 years
https://www.garlic.com/~lynn/2011.html#16 Looking for a real Fortran-66 compatible PC compiler (CP/M or DOS or Windows, doesn't matter)
https://www.garlic.com/~lynn/2003b.html#14 Disk drives as commodities. Was Re: Yamhill

--
virtualization experience starting Jan1968, online at home since Mar1970

TYMSHARE Dialup

From: Lynn Wheeler <lynn@garlic.com>
Subject: TYMSHARE Dialup
Date: 30 Jul, 2024
Blog: Facebook

IBM 2741 dialup at home, Mar1970-Jun1977, replaced by CDI Miniterm.

Note Aug1976, TYMSHARE starts offering their VM370/CMS-based online
computer conferencing to (user group) SHARE as VMSHARE ... archives
here:
http://vm.marist.edu/~vmshare

I cut a deal w/TYMSHARE to get a monthly tape dump of all VMSHARE (and
later PCSHARE) files for putting up on the internal network and internal
systems ... one of the biggest problems was lawyers' concern that
internal employees would be contaminated by unfiltered customer
information.

Much later with M/D acquiring TYMSHARE, I was brought in to review
GNOSIS for the spinoff:

Ann Hardy at Computer History Museum
https://www.computerhistory.org/collections/catalog/102717167

Ann rose up to become Vice President of the Integrated Systems
Division at Tymshare, from 1976 to 1984, which did online airline
reservations, home banking, and other applications. When Tymshare was
acquired by McDonnell-Douglas in 1984, Ann's position as a female VP
became untenable, and was eased out of the company by being encouraged
to spin out Gnosis, a secure, capabilities-based operating system
developed at Tymshare. Ann founded Key Logic, with funding from Gene
Amdahl, which produced KeyKOS, based on Gnosis, for IBM and Amdahl
mainframes. After closing Key Logic, Ann became a consultant, leading
to her cofounding Agorics with members of Ted Nelson's Xanadu project.

... snip ...

Ann Hardy
https://medium.com/chmcore/someone-elses-computer-the-prehistory-of-cloud-computing-bca25645f89

Ann Hardy is a crucial figure in the story of Tymshare and
time-sharing. She began programming in the 1950s, developing software
for the IBM Stretch supercomputer. Frustrated at the lack of
opportunity and pay inequality for women at IBM -- at one point she
discovered she was paid less than half of what the lowest-paid man
reporting to her was paid -- Hardy left to study at the University of
California, Berkeley, and then joined the Lawrence Livermore National
Laboratory in 1962. At the lab, one of her projects involved an early
and surprisingly successful time-sharing operating system.

... snip ...

If Discrimination, Then Branch: Ann Hardy's Contributions to Computing
https://computerhistory.org/blog/if-discrimination-then-branch-ann-hardy-s-contributions-to-computing/

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
online virtual machine commercial services
https://www.garlic.com/~lynn/submain.html#online

--
virtualization experience starting Jan1968, online at home since Mar1970

DASD CKD

From: Lynn Wheeler <lynn@garlic.com>
Subject: DASD CKD
Date: 30 Jul, 2024
Blog: Facebook

CKD was a trade-off between i/o capacity and mainframe memory in the
mid-60s ... but by the mid-70s the trade-off started to flip. IBM 3370
FBA appeared in the late 70s and then all disks started to migrate to
fixed-block (which can be seen in the 3380 records/track formulas, where
record size had to be rounded up to a fixed "cell size"). No CKD DASD
have been made for decades; all are simulated on industry-standard
fixed-block disks.
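
As a purely hypothetical illustration of that cell-size rounding (the
constants below are made up for the example, not the actual 3380
numbers), a records/track calculation of this style rounds each record
up to whole cells plus a fixed per-record overhead, so different record
sizes can occupy the same number of cells:

# hypothetical sketch of fixed "cell size" rounding in a records/track
# calculation -- CELL_BYTES, TRACK_CELLS, RECORD_OVERHEAD are assumptions
import math

CELL_BYTES = 32        # assumed cell size
TRACK_CELLS = 1500     # assumed usable cells per track
RECORD_OVERHEAD = 15   # assumed per-record overhead, in cells

def records_per_track(data_bytes):
    # each record's data is rounded up to whole cells, plus fixed overhead
    cells = RECORD_OVERHEAD + math.ceil(data_bytes / CELL_BYTES)
    return TRACK_CELLS // cells

for size in (100, 128, 4096):
    cells = RECORD_OVERHEAD + math.ceil(size / CELL_BYTES)
    print(f"{size}-byte record -> {cells} cells/record, "
          f"{records_per_track(size)} records/track")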

I took a two credit-hr intro to fortran/computers class and at the end
of the semester was hired to rewrite 1401 MPIO for the 360/30. The univ
was getting a 360/67 for tss/360 to replace a 709/1401 and temporarily
got a 360/30 (that had 1401 microcode emulation) to replace the 1401,
pending arrival of the 360/67 (the univ shutdown the datacenter on
weekends, and I would have it dedicated, although 48hrs w/o sleep made
Monday classes hard). Within a year of taking the intro class, the
360/67 showed up and I was hired fulltime responsible for OS/360
(tss/360 never came to production, so it ran as a 360/65 with os/360)
... and I continued to have my dedicated weekend time. Student fortran
ran under a second on the 709 (tape to tape), but initially over a
minute on the 360/65. I install HASP and it cuts the time in half. I
then start revamping stage2 sysgen to place datasets and PDS members to
optimize disk seeks and multi-track searches, cutting another 2/3rds to
12.9secs; never got better than the 709 until I install univ of
waterloo WATFOR.

My 1st SYSGEN was R9.5 MFT; I then started redoing stage2 sysgen for
R11 MFT. MVT shows up with R12, but I didn't do an MVT gen until R15/16
(the 15/16 disk format allowed specifying the VTOC cyl ... aka placing
it other than at cyl0 to reduce avg. arm seek).

Bob Bemer history page (gone 404, but lives on at wayback machine)
https://web.archive.org/web/20180402200149/http://www.bobbemer.com/HISTORY.HTM

360s were originally to be ASCII machines ... but the ASCII unit
record gear wasn't ready ... so had to use old tab BCD gear (and
EBCDIC) ... biggest computer goof ever:
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM

Learson is named in the "biggest computer goof ever" ... he then
becomes CEO and tries (and fails) to block the bureaucrats, careerists
and MBAs from destroying the Watson culture and legacy ... and 20yrs
later, IBM has one of the largest losses in the history of US companies
and was being reorged into the 13 "baby blues" in preparation for
breaking up the company
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

DASD CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd
getting to play disk engineer in bldg 14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

some recent posts mentioning ASCII and/or Bob Bemer
https://www.garlic.com/~lynn/2024d.html#107 Biggest Computer Goof Ever
https://www.garlic.com/~lynn/2024d.html#105 Biggest Computer Goof Ever
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#99 Interdata Clone IBM Telecommunication Controller
https://www.garlic.com/~lynn/2024d.html#74 Some Email History
https://www.garlic.com/~lynn/2024d.html#33 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#53 IBM 3705 & 3725
https://www.garlic.com/~lynn/2024c.html#16 CTSS, Multicis, CP67/CMS
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024c.html#14 Bemer, ASCII, Brooks and Mythical Man Month
https://www.garlic.com/~lynn/2024b.html#114 EBCDIC
https://www.garlic.com/~lynn/2024b.html#113 EBCDIC
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#81 rusty iron why ``folklore''?
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#59 Vintage HSDT
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#102 EBCDIC Card Punch Format
https://www.garlic.com/~lynn/2024.html#100 Multicians
https://www.garlic.com/~lynn/2024.html#40 UNIX, MULTICS, CTSS, CSC, CP67
https://www.garlic.com/~lynn/2024.html#31 MIT Area Computing
https://www.garlic.com/~lynn/2024.html#26 1960's COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME Origin and Technology (IRS, NASA)
https://www.garlic.com/~lynn/2024.html#12 THE RISE OF UNIX. THE SEEDS OF ITS FALL

--
virtualization experience starting Jan1968, online at home since Mar1970

TYMSHARE Dialup

From: Lynn Wheeler <lynn@garlic.com>
Subject: TYMSHARE Dialup
Date: 30 Jul, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024e.html#1 TYMSHARE Dialup

CSC comes out to install CP67/CMS (3rd installation after CSC itself
and MIT Lincoln Labs; precursor to VM370), which I mostly got to play
with during my weekend dedicated time. The first few months I mostly
spent rewriting CP67 pathlengths for running os/360 in a virtual
machine; the test os/360 stream ran 322secs stand-alone but initially
856secs virtually (534secs of CP67 CPU overhead); I got the CP67 CPU
down to 113secs. CP67 had 1052 & 2741 support with automatic terminal
type identification (controller SAD CCW to switch the port scanner
terminal type). The univ had some TTY terminals, so I added TTY support
integrated with the automatic terminal type identification.

I then wanted a single dial-up number ("hunt group") for all terminals
... but IBM had taken a short-cut and hardwired the port line speed
... which kicks off a univ. program to build a clone controller:
building a channel interface board for an Interdata/3 programmed to
emulate the IBM controller, with the addition of automatic line-speed
detection. It was later upgraded to an Interdata/4 for the channel
interface with a cluster of Interdata/3s for the port interfaces. Four
of us get written up as responsible for (some part of) the IBM clone
controller business ... initially sold by Interdata and then by
Perkin-Elmer
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division

Turn of the century, on a datacenter tour, a descendant of that
Interdata telecommunication controller was handling the majority of all
credit card swipe dial-up terminals east of the Mississippi.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

--
virtualization experience starting Jan1968, online at home since Mar1970

Private Equity

From: Lynn Wheeler <lynn@garlic.com>
Subject: Private Equity
Date: 31 Jul, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024d.html#106 Private Equity

When private equity buys a hospital, assets shrink, new research
finds. The study comes as U.S. regulators investigate the industry's
profit-taking and its effect on patient care.
https://archive.ph/ClHZ5

Private Equity Professionals Are 'Fighting Fires' in Their Portfolios,
Slowing Down the Recovery. At the same time, "the interest rate spike
has raised the stakes of holding an asset longer," says Bain & Co.
https://www.institutionalinvestor.com/article/2dkcwhdzmq3njso767d34/portfolio/private-equity-professionals-are-fighting-fires-in-their-portfolios-slowing-down-the-recovery

private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

For Big Companies, Felony Convictions Are a Mere Footnote

From: Lynn Wheeler <lynn@garlic.com>
Subject: For Big Companies, Felony Convictions Are a Mere Footnote
Date: 31 Jul, 2024
Blog: Facebook

For Big Companies, Felony Convictions Are a Mere Footnote. Boeing
guilty plea highlights how corporate convictions rarely have
consequences that threaten the business
https://archive.ph/if7H0

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

some posts mentioning Boeing (& fraud):
https://www.garlic.com/~lynn/2019d.html#42 Defense contractors aren't securing sensitive information, watchdog finds
https://www.garlic.com/~lynn/2018d.html#37 Imagining a Cyber Surprise: How Might China Use Stolen OPM Records to Target Trust?
https://www.garlic.com/~lynn/2018c.html#26 DoD watchdog: Air Force failed to effectively manage F-22 modernization
https://www.garlic.com/~lynn/2017h.html#55 Pareto efficiency
https://www.garlic.com/~lynn/2017h.html#54 Pareto efficiency
https://www.garlic.com/~lynn/2015f.html#42 No, the F-35 Can't Fight at Long Range, Either
https://www.garlic.com/~lynn/2014i.html#13 IBM & Boyd
https://www.garlic.com/~lynn/2012g.html#3 Quitting Top IBM Salespeople Say They Are Leaving In Droves
https://www.garlic.com/~lynn/2011f.html#88 Court OKs Firing of Boeing Computer-Security Whistleblowers
https://www.garlic.com/~lynn/2010f.html#75 Is Security a Curse for the Cloud Computing Industry?
https://www.garlic.com/~lynn/2007c.html#18 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007.html#5 Securing financial transactions a high priority for 2007

--
virtualization experience starting Jan1968, online at home since Mar1970

2314 Disks

From: Lynn Wheeler <lynn@garlic.com>
Subject: 2314 Disks
Date: 31 Jul, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024e.html#0 2314 Disks

trivia: ACP/TPF got a 3830 symbolic lock RPQ for loosely-coupled
operation (sort of like the later DEC VAXCluster implementation), much
faster than the reserve/release protocol ... but it was limited to
four-system operation (and the disk division discontinued it since it
conflicted with string switch, which required two 3830s). HONE did an
interesting hack that simulated the processor compare&swap instruction
semantics with I/O, which worked across string switch ... so HONE
extended to 8-system operation (and with SMP, 16 processors).

archived email with ACP/TPF 3830 disk controller lock RPQ details
... it only serializes I/O for channels connected to the same 3830
controller.
https://www.garlic.com/~lynn/2008i.html#email800325
in this post, which has a little detail about the HONE I/O that
simulates the processor compare-and-swap instruction semantics (works
across string switch and multiple disk controllers)
https://www.garlic.com/~lynn/2008i.html#39
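
A minimal sketch of the compare-and-swap semantics being simulated
(just an illustration of the instruction's behavior, not the HONE
channel-program implementation; the lock word, system ids and retry
loop are all assumptions for the example):

# sketch of compare-and-swap semantics -- the lock word here is an
# in-memory cell; in the HONE hack the equivalent word lived on shared
# DASD and the atomic step was done with a channel program
import threading

_atomic = threading.Lock()   # stands in for the hardware atomicity guarantee

def compare_and_swap(cell, expected, new):
    """If cell[0] still equals expected, store new and return True."""
    with _atomic:
        if cell[0] == expected:
            cell[0] = new
            return True
        return False         # someone else updated it first; caller retries

FREE = 0
lock_word = [FREE]           # shared lock word

def acquire(system_id):
    while not compare_and_swap(lock_word, FREE, system_id):
        pass                 # spin/re-drive until the lock word is grabbed

def release(system_id):
    compare_and_swap(lock_word, system_id, FREE)

acquire(3)                   # e.g. system 3 in the complex takes the lock
# ... update shared data ...
release(3)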

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE system posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, multiprocessor (and some compare-and-swap)
https://www.garlic.com/~lynn/subtopic.html#smp
other posts getting to play disk engineer in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk

recent posts mentioning HONE, loosely-coupled, single-system-image
https://www.garlic.com/~lynn/2024d.html#113 ... some 3090 and a little 3081
https://www.garlic.com/~lynn/2024d.html#100 Chipsandcheese article on the CDC6600
https://www.garlic.com/~lynn/2024d.html#92 Computer Virtual Memory
https://www.garlic.com/~lynn/2024d.html#62 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024c.html#119 Financial/ATM Processing
https://www.garlic.com/~lynn/2024c.html#112 Multithreading
https://www.garlic.com/~lynn/2024c.html#90 Gordon Bell
https://www.garlic.com/~lynn/2024b.html#72 Vintage Internet and Vintage APL
https://www.garlic.com/~lynn/2024b.html#31 HONE, Performance Predictor, and Configurators
https://www.garlic.com/~lynn/2024b.html#26 HA/CMP
https://www.garlic.com/~lynn/2024b.html#18 IBM 5100
https://www.garlic.com/~lynn/2024.html#112 IBM User Group SHARE
https://www.garlic.com/~lynn/2024.html#88 IBM 360
https://www.garlic.com/~lynn/2024.html#78 Mainframe Performance Optimization
https://www.garlic.com/~lynn/2023g.html#72 MVS/TSO and VM370/CMS Interactive Response
https://www.garlic.com/~lynn/2023g.html#30 Vintage IBM OS/VU
https://www.garlic.com/~lynn/2023g.html#6 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#41 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#75 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#53 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#90 IBM 3083
https://www.garlic.com/~lynn/2023d.html#23 VM370, SMP, HONE
https://www.garlic.com/~lynn/2023c.html#77 IBM Big Blue, True Blue, Bleed Blue
https://www.garlic.com/~lynn/2023c.html#10 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#80 IBM 158-3 (& 4341)
https://www.garlic.com/~lynn/2023.html#91 IBM 4341
https://www.garlic.com/~lynn/2023.html#49 23Jun1969 Unbundling and Online IBM Branch Offices
https://www.garlic.com/~lynn/2022h.html#112 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022h.html#2 360/91
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#88 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#61 Datacenter Vulnerability
https://www.garlic.com/~lynn/2022f.html#59 The Man That Helped Change IBM
https://www.garlic.com/~lynn/2022f.html#53 z/VM 50th - part 4
https://www.garlic.com/~lynn/2022f.html#44 z/VM 50th
https://www.garlic.com/~lynn/2022f.html#30 IBM Power: The Servers that Apple Should Have Created
https://www.garlic.com/~lynn/2022f.html#10 9 Mainframe Statistics That May Surprise You
https://www.garlic.com/~lynn/2022e.html#50 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022d.html#62 IBM 360/50 Simulation From Its Microcode
https://www.garlic.com/~lynn/2022b.html#8 Porting APL to CP67/CMS
https://www.garlic.com/~lynn/2022.html#101 Online Computer Conferencing
https://www.garlic.com/~lynn/2022.html#81 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#29 IBM HONE
https://www.garlic.com/~lynn/2021k.html#23 MS/DOS for IBM/PC
https://www.garlic.com/~lynn/2021j.html#108 168 Loosely-Coupled Configuration
https://www.garlic.com/~lynn/2021j.html#25 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021i.html#77 IBM ACP/TPF
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021g.html#34 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021f.html#30 IBM HSDT & HA/CMP
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2021d.html#43 IBM Powerpoint sales presentations
https://www.garlic.com/~lynn/2021b.html#80 AT&T Long-lines
https://www.garlic.com/~lynn/2021b.html#15 IBM Recruiting
https://www.garlic.com/~lynn/2021.html#86 IBM Auditors and Games
https://www.garlic.com/~lynn/2021.html#74 Airline Reservation System

--
virtualization experience starting Jan1968, online at home since Mar1970

For Big Companies, Felony Convictions Are a Mere Footnote

From: Lynn Wheeler <lynn@garlic.com>
Subject: For Big Companies, Felony Convictions Are a Mere Footnote
Date: 31 Jul, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024e.html#5 For Big Companies, Felony Convictions Are a Mere Footnote

The rest of the article:

The accounting firm Arthur Andersen collapsed in 2002 after
prosecutors indicted the company for shredding evidence related to its
audits of failed energy conglomerate Enron. For years after Andersen's
demise, prosecutors held back from indicting major corporations,
fearing they would kill the firm in the process.

... snip ...

The Sarbanes-Oxley joke was that congress felt so bad about the
Andersen collapse that they really increased the audit requirements for
public companies. The rhetoric on the floor of congress was claims that
SOX would prevent future ENRONs and guarantee that executives &
auditors did jail time ... however it required the SEC to do something.
GAO did analysis of public company fraudulent financial filings,
showing that they even increased after SOX went into effect (and nobody
was doing jail time).
http://www.gao.gov/products/GAO-03-138
http://www.gao.gov/products/GAO-06-678
http://www.gao.gov/products/GAO-06-1053R

The other observation was that possibly the only part of SOX that was
going to make some difference might be the informants/whistleblowers
(folklore was that one of the congressional members involved in SOX had
been former FBI involved in taking down organized crime, and supposedly
what made it possible were informants/whistleblowers).

Something similar showed up with the economic mess: financial houses
2001-2008 did over $27T in securitizing mortgages/loans, aka paying for
triple-A ratings (when the rating agencies knew they weren't worth
triple-A, from Oct2008 congressional hearings) and selling into the
bond market. YE2008, just the four largest too-big-to-fail were still
carrying $5.2T in offbook toxic CDOs.

Then it was found that some of the too-big-to-fail were money
laundering for terrorists and drug cartels (various stories that it
enabled drug cartels to buy military grade equipment largely
responsible for violence on both sides of the border). There would be
repeated "deferred prosecutions" (promising never to do it again, each
time) ... supposedly if they repeated they would be prosecuted (but
apparently previous violations were consistently ignored). This gave
rise to too-big-to-prosecute and too-big-to-jail ... in addition to
too-big-to-fail.
https://en.wikipedia.org/wiki/Deferred_prosecution

trivia: 1999: I was asked to try and help block (we failed) the coming
economic mess; 2004: I was invited to the annual conference of EU CEOs
and heads of financial exchanges; that year's theme was that EU
companies that dealt with US companies were being forced into
performing SOX audits (aka I was there to discuss the effectiveness of
SOX).

ENRON posts
https://www.garlic.com/~lynn/submisc.html#enron
Sarbanes-Oxley posts
https://www.garlic.com/~lynn/submisc.html#sarbanes-oxley
whistleblower posts
https://www.garlic.com/~lynn/submisc.html#whistleblower
Fraudulent Financial Filing posts
https://www.garlic.com/~lynn/submisc.html#financial.reporting.fraud
economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess
too-big-to-fail (too-big-to-prosecute, too-big-to-jail) posts
https://www.garlic.com/~lynn/submisc.html#too-big-to-fail
(offbook) toxic CDOs
https://www.garlic.com/~lynn/submisc.html#toxic.cdo
money laundering posts
https://www.garlic.com/~lynn/submisc.html#money.laundering
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

some posts mentioning deferred prosecution
https://www.garlic.com/~lynn/2024d.html#59 Too-Big-To-Fail Money Laundering
https://www.garlic.com/~lynn/2024.html#58 Sales of US-Made Guns and Weapons, Including US Army-Issued Ones, Are Under Spotlight in Mexico Again
https://www.garlic.com/~lynn/2024.html#19 Huge Number of Migrants Highlights Border Crisis
https://www.garlic.com/~lynn/2022h.html#89 As US-style corporate leniency deals for bribery and corruption go global, repeat offenders are on the rise
https://www.garlic.com/~lynn/2021k.html#73 Wall Street Has Deployed a Dirty Tricks Playbook Against Whistleblowers for Decades, Now the Secrets Are Spilling Out
https://www.garlic.com/~lynn/2018e.html#111 Pigs Want To Feed at the Trough Again: Bernanke, Geithner and Paulson Use Crisis Anniversary to Ask for More Bailout Powers
https://www.garlic.com/~lynn/2018d.html#60 Dirty Money, Shiny Architecture
https://www.garlic.com/~lynn/2017h.html#56 Feds WIMP
https://www.garlic.com/~lynn/2017b.html#39 Trump to sign cyber security order
https://www.garlic.com/~lynn/2017b.html#13 Trump to sign cyber security order
https://www.garlic.com/~lynn/2017.html#45 Western Union Admits Anti-Money Laundering and Consumer Fraud Violations, Forfeits $586 Million in Settlement with Justice Department and Federal Trade Commission
https://www.garlic.com/~lynn/2016e.html#109 Why Aren't Any Bankers in Prison for Causing the Financial Crisis?
https://www.garlic.com/~lynn/2016c.html#99 Why Is the Obama Administration Trying to Keep 11,000 Documents Sealed?
https://www.garlic.com/~lynn/2016c.html#41 Qbasic
https://www.garlic.com/~lynn/2016c.html#29 Qbasic
https://www.garlic.com/~lynn/2016b.html#73 Qbasic
https://www.garlic.com/~lynn/2016b.html#0 Thanks Obama
https://www.garlic.com/~lynn/2016.html#36 I Feel Old
https://www.garlic.com/~lynn/2016.html#10 25 Years: How the Web began
https://www.garlic.com/~lynn/2015h.html#65 Economic Mess
https://www.garlic.com/~lynn/2015h.html#47 rationality
https://www.garlic.com/~lynn/2015h.html#44 rationality
https://www.garlic.com/~lynn/2015h.html#31 Talk of Criminally Prosecuting Corporations Up, Actual Prosecutions Down
https://www.garlic.com/~lynn/2015f.html#61 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2015f.html#57 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2015f.html#37 LIBOR: History's Largest Financial Crime that the WSJ and NYT Would Like You to Forget
https://www.garlic.com/~lynn/2015f.html#36 Eric Holder, Wall Street Double Agent, Comes in From the Cold
https://www.garlic.com/~lynn/2015e.html#47 Do we REALLY NEED all this regulatory oversight?
https://www.garlic.com/~lynn/2015e.html#44 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2015e.html#23 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2015d.html#80 Greedy Banks Nailed With $5 BILLION+ Fine For Fraud And Corruption
https://www.garlic.com/~lynn/2014i.html#10 Instead of focusing on big fines, law enforcement should seek long prison terms for the responsible executives

--
virtualization experience starting Jan1968, online at home since Mar1970

Ampere Arm Server CPUs To Get 512 Cores, AI Accelerator

From: Lynn Wheeler <lynn@garlic.com>
Subject: Ampere Arm Server CPUs To Get 512 Cores, AI Accelerator
Date: 31 Jul, 2024
Blog: Facebook

AmpereOne Aurora In Development With Up To 512 Cores, AmpereOne Prices
Published
https://www.phoronix.com/review/ampereone-skus-pricing
Ampere Arm Server CPUs To Get 512 Cores, AI Accelerator
https://www.nextplatform.com/2024/07/31/ampere-arm-server-cpus-to-get-512-cores-ai-accelerator/

When it comes to server CPUs, the "cloud titans" account for more than
half the server revenues and more than half the shipments, and with
server GPUs, which are by far the dominant AI accelerators, these
companies probably account for 65 percent or maybe even 70 percent or
75 percent of revenues and shipments.

... snip ...

There was a comparison of IBM's max-configured z196, 80 processors,
50BIPS, $30M ($600,000/BIPS) with an IBM E5-2600 server blade, 16
processors, 500BIPS, base list price $1815 ($3.63/BIPS). Note the BIPS
benchmark is the number of iterations of a program compared to the
reference platform (not an actual count of instructions). At the time,
large cloud operations claimed that they were assembling their own
server blades for 1/3rd the cost of brand-name servers ($605,
$1.21/BIPS ... 500BIPS, ten times the BIPS of a max-configured z196 at
roughly 1/500,000th the price/BIPS). Then there were articles that the
server chip vendors were shipping at least half their product directly
to the large cloud operations (that assemble their own servers).
Shortly later, IBM sells off its server product line.
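
Working through that arithmetic (figures taken from the paragraph
above; the script just divides price by BIPS and compares the ratios):

# price-per-BIPS comparison using the figures quoted above
z196_price, z196_bips   = 30_000_000, 50    # max-configured z196
blade_price, blade_bips = 1_815, 500        # E5-2600 blade, base list price
cloud_price             = blade_price / 3   # clouds claimed 1/3rd the cost

for name, price, bips in (("z196", z196_price, z196_bips),
                          ("E5-2600 blade", blade_price, blade_bips),
                          ("cloud-assembled blade", cloud_price, blade_bips)):
    print(f"{name}: ${price / bips:,.2f}/BIPS")

ratio = (z196_price / z196_bips) / (cloud_price / blade_bips)
print(f"z196 price/BIPS is ~{ratio:,.0f} times the cloud blade's")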

cloud megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

some posts mentioning IBM z196/e5-2600
https://www.garlic.com/~lynn/2023g.html#40 Vintage Mainframe
https://www.garlic.com/~lynn/2022h.html#112 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022c.html#19 Telum & z16
https://www.garlic.com/~lynn/2022b.html#63 Mainframes
https://www.garlic.com/~lynn/2021j.html#56 IBM and Cloud Computing
https://www.garlic.com/~lynn/2021i.html#92 How IBM lost the cloud
https://www.garlic.com/~lynn/2021b.html#0 Will The Cloud Take Down The Mainframe?
https://www.garlic.com/~lynn/2014f.html#78 Over in the Mainframe Experts Network LinkedIn group
https://www.garlic.com/~lynn/2014f.html#67 Is end of mainframe near ?

--
virtualization experience starting Jan1968, online at home since Mar1970

Saudi Arabia and 9/11

From: Lynn Wheeler <lynn@garlic.com>
Subject: Saudi Arabia and 9/11
Date: 31 Jul, 2024
Blog: Facebook

Saudi Arabia and 9/11

After 9/11, victims were prohibited from suing Saudi Arabia for
responsibility; that wasn't lifted until 2013. Some recent progress in
holding Saudi Arabia accountable:

New Claims of Saudi Role in 9/11 Bring Victims' Families Back to Court
in Lawsuit Against Riyadh
https://www.nysun.com/article/new-claims-of-saudi-role-in-9-11-bring-victims-families-back-to-court-in-lawsuit-against-riyadh
Lawyers for Saudi Arabia seek dismissal of claims it supported the
Sept. 11 hijackers
https://abcnews.go.com/US/wireStory/lawyers-saudi-arabia-seek-dismissal-claims-supported-sept-112458690
September 11th families suing Saudi Arabia back in federal court in
Lower Manhattan, New York City
https://abc7ny.com/post/september-11th-families-suing-saudi-arabia-back-federal-court-lower-manhattan-new-york-city/15126848/
Video: 'Wow, shocking': '9/11 Justice' president reacts to report on
possible Saudi involvement in 9/11
https://www.cnn.com/2024/07/31/us/video/saudi-arabia-9-11-report-eagleson-lead-digvid
9/11 defendants reach plea deal with Defense Department in Saudi
Arabia lawsuit
https://www.fox5ny.com/news/9-11-justice-families-saudi-arabia-lawsuit-hearing-attacks
9/11 families furious over plea deal for terror mastermind on same day
Saudi lawsuit before judge
https://www.bostonherald.com/2024/07/31/9-11-families-furious-over-plea-deal-for-terror-mastermind-on-same-day-saudi-lawsuit-before-judge/
Judge hears evidence against Saudi Arabia in 9/11 families lawsuit
https://www.newsnationnow.com/world/9-11-families-news-conference-saudi-lawsuit-hearing/
9/11 families furious over plea deal for terror mastermind on same day
Saudi lawsuit goes before judge | Nation World
https://www.rv-times.com/nation_world/9-11-families-furious-over-plea-deal-for-terror-mastermind-on-same-day-saudi-lawsuit/article_762df686-9b9e-58df-9140-30268848e252.html

Latest news: some of the recent 9/11 Saudi Arabia material wasn't
released by the US gov. but was obtained from the British gov.

U.S. Signals It Will Release Some Still-Secret Files on Saudi Arabia
and 9/11
https://www.nytimes.com/2021/08/09/us/politics/sept-11-saudi-arabia-biden.html
Democratic senators increase pressure to declassify 9/11 documents
related to Saudi role in attacks
https://thehill.com/policy/national-security/566547-democratic-senators-increase-pressure-to-declassify-9-11-documents/

military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
perpetual war posts
https://www.garlic.com/~lynn/submisc.html#perpetual.war
WMD posts
https://www.garlic.com/~lynn/submisc.html#wmds

some past posts mentioning Saudi Arabia and 9/11
https://www.garlic.com/~lynn/2020.html#22 The Saudi Connection: Inside the 9/11 Case That Divided the F.B.I
https://www.garlic.com/~lynn/2019e.html#143 "Undeniable Evidence": Explosive Classified Docs Reveal Afghan War Mass Deception
https://www.garlic.com/~lynn/2019e.html#85 Just and Unjust Wars
https://www.garlic.com/~lynn/2019e.html#70 Since 2001 We Have Spent $32 Million Per Hour on War
https://www.garlic.com/~lynn/2019e.html#67 Profit propaganda ads witch-hunt era
https://www.garlic.com/~lynn/2019d.html#99 Trump claims he's the messiah. Maybe he should quit while he's ahead
https://www.garlic.com/~lynn/2019d.html#79 Bretton Woods Institutions: Enforcers, Not Saviours?
https://www.garlic.com/~lynn/2019d.html#54 Global Warming and U.S. National Security Diplomacy
https://www.garlic.com/~lynn/2019d.html#7 You paid taxes. These corporations didn't
https://www.garlic.com/~lynn/2019b.html#56 U.S. Has Spent Six Trillion Dollars on Wars That Killed Half a Million People Since 9/11, Report Says
https://www.garlic.com/~lynn/2019.html#45 Jeffrey Skilling, Former Enron Chief, Released After 12 Years in Prison
https://www.garlic.com/~lynn/2019.html#42 Army Special Operations Forces Unconventional Warfare
https://www.garlic.com/~lynn/2018b.html#65 Doubts about the HR departments that require knowledge of technology that does not exist
https://www.garlic.com/~lynn/2016c.html#93 Qbasic
https://www.garlic.com/~lynn/2015g.html#13 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2015g.html#12 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2015d.html#54 The Jeb Bush Adviser Who Should Scare You
https://www.garlic.com/~lynn/2015.html#72 George W. Bush: Still the worst; A new study ranks Bush near the very bottom in history
https://www.garlic.com/~lynn/2014d.html#89 Difference between MVS and z / OS systems
https://www.garlic.com/~lynn/2014d.html#11 Royal Pardon For Turing
https://www.garlic.com/~lynn/2014d.html#4 Royal Pardon For Turing
https://www.garlic.com/~lynn/2014c.html#103 Royal Pardon For Turing
https://www.garlic.com/~lynn/2013j.html#30 What Makes a Tax System Bizarre?

--
virtualization experience starting Jan1968, online at home since Mar1970

For Big Companies, Felony Convictions Are a Mere Footnote

From: Lynn Wheeler <lynn@garlic.com>
Subject: For Big Companies, Felony Convictions Are a Mere Footnote
Date: 31 Jul, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024e.html#5 For Big Companies, Felony Convictions Are a Mere Footnote
https://www.garlic.com/~lynn/2024e.html#7 For Big Companies, Felony Convictions Are a Mere Footnote

The Madoff congressional hearings had the person who had tried
(unsuccessfully) for a decade to get the SEC to do something about
Madoff (the SEC's hands were finally forced when Madoff turned himself
in; the story is that he had defrauded some unsavory characters and was
looking for gov. protection). In any case, part of the hearing
testimony was that informants turn up 13 times more fraud than audits
(while the SEC had a 1-800 number to complain about audits, it didn't
have a 1-800 "tip" line).

Madoff posts
https://www.garlic.com/~lynn/submisc.html#madoff
Sarbanes-Oxley posts
https://www.garlic.com/~lynn/submisc.html#sarbanes-oxley
whistleblower posts
https://www.garlic.com/~lynn/submisc.html#whistleblower

--
virtualization experience starting Jan1968, online at home since Mar1970

Private Equity Giants Invest More Than $200M in Federal Races to Protect Their Lucrative Tax Loophole

From: Lynn Wheeler <lynn@garlic.com>
Subject: Private Equity Giants Invest More Than $200M in Federal Races to Protect Their Lucrative Tax Loophole
Date: 02 Aug, 2024
Blog: Facebook

Private Equity Giants Invest More Than $200M in Federal Races to
Protect Their Lucrative Tax Loophole
https://www.exposedbycmd.org/2024/07/31/private-equity-giants-invest-more-than-200m-in-federal-races-to-protect-their-lucrative-tax-loophole/

The 11 private equity leaders have contributed more than $223 million
to congressional and presidential candidates and the super PACs that
support them, accounting for almost 20% of all money contributed by
thousands of companies in the finance sector, according to Open
Secrets data. During the 2016 election cycle, the Center for Media and
Democracy (CMD) reported that the 147 private equity firms it analyzed
contributed $92 million to candidates and super PACs.

... snip ...

... trivia: the industry had gotten such a bad reputation during the
"S&L Crisis" that they changed the name to private equity and "junk
bonds" became "high-yield bonds". There was a business TV news show
where the interviewer repeatedly said "junk bonds" and the person being
interviewed kept saying "high-yield bonds".

private-equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
tax fraud, tax evasion, tax loopholes, tax abuse, tax avoidance, tax
haven posts
https://www.garlic.com/~lynn/submisc.html#tax.evasion
S&L Crisis posts
https://www.garlic.com/~lynn/submisc.html#s&l.crisis

--
virtualization experience starting Jan1968, online at home since Mar1970

360 1052-7 Operator's Console

From: Lynn Wheeler <lynn@garlic.com>
Subject: 360 1052-7 Operator's Console
Date: 02 Aug, 2024
Blog: Facebook

I took a two credit-hr intro to fortran/computers class and at the end
of the semester was hired to rewrite 1401 MPIO for the 360/30. The univ
was getting a 360/67 (for tss/360) to replace a 709/1401 and
temporarily got a 360/30 (replacing the 1401) pending the 360/67. The
univ. shutdown the datacenter over the weekend and I had the whole
place dedicated, although 48hrs w/o sleep made Monday classes hard.
They gave me a bunch of hardware and software manuals and I got to
design my own monitor, device drivers, interrupt handlers, error
recovery, storage management, etc, and within a few weeks had a
2000-card assembler program. Within a year of taking the intro class,
the 360/67 arrived and I was hired fulltime responsible for os/360
(tss/360 never came to production) and I continued to have my 48hr
weekend window. One weekend I had been at it for some 30+hrs when the
1052-7 console (same as on the 360/30) stopped typing and the machine
would just ring the bell. I spent 30-40mins trying everything I could
think of before I hit the 1052-7 with my fist and the paper dropped to
the floor. It turns out that the end of the (fan-fold) paper had passed
the paper-sensing finger (resulting in a 1052-7 unit check with
intervention required), but there was enough friction to keep the paper
in position, so the problem wasn't apparent (until the console was
jostled by my fist).

archived posts mentioning 1052-7 and end of paper
https://www.garlic.com/~lynn/2023f.html#108 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2022d.html#27 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022.html#5 360 IPL
https://www.garlic.com/~lynn/2017h.html#5 IBM System/360
https://www.garlic.com/~lynn/2017.html#38 Paper tape (was Re: Hidden Figures)
https://www.garlic.com/~lynn/2010n.html#43 Paper tape
https://www.garlic.com/~lynn/2006n.html#1 The System/360 Model 20 Wasn't As Bad As All That
https://www.garlic.com/~lynn/2006k.html#27 PDP-1
https://www.garlic.com/~lynn/2006f.html#23 Old PCs--environmental hazard
https://www.garlic.com/~lynn/2005c.html#12 The mid-seventies SHARE survey
https://www.garlic.com/~lynn/2002j.html#16 Ever inflicted revenge on hardware ?
https://www.garlic.com/~lynn/2001.html#3 First video terminal?

--
virtualization experience starting Jan1968, online at home since Mar1970

360 1052-7 Operator's Console

From: Lynn Wheeler <lynn@garlic.com>
Subject: 360 1052-7 Operator's Console
Date: 02 Aug, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024e.html#12 360 1052-7 Operator's Console

... other trivia: student fortran jobs ran in less than a second on the
709 (tape->tape) .... initially with os/360 on the 360/67 they ran over
a minute. I install HASP and it cuts the time in half. I then start
redoing stage2 sysgen to place datasets and PDS members to optimize arm
seek and multi-track search, cutting another 2/3rds to 12.9secs; never
got better than the 709 until I install univ. waterloo watfor.

before I graduate, I was hired fulltime into a small group in the
Boeing CFO office to help with the formation of Boeing Computer
Services (consolidating all dataprocessing into an independent business
unit). I thought the Renton datacenter was the largest in the world, a
couple hundred million in 360s, 360/65s arriving faster than they could
be installed, boxes constantly staged in the hallways around the
machine room. Lots of politics between the Renton director and the CFO,
who only had a 360/30 up at Boeing field for payroll (although they
enlarge the room and install a 360/67 for me to play with when I'm not
doing other stuff).

recent posts mentioning Watfor and Boeing CFO/Renton
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024d.html#22 Early Computer Use
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#73 UNIX, MULTICS, CTSS, CSC, CP67
https://www.garlic.com/~lynn/2024.html#43 Univ, Boeing Renton and "Spook Base"
https://www.garlic.com/~lynn/2024.html#17 IBM Embraces Virtual Memory -- Finally

--
virtualization experience starting Jan1968, online at home since Mar1970

50 years ago, CP/M started the microcomputer revolution

From: Lynn Wheeler <lynn@garlic.com>
Subject: 50 years ago, CP/M started the microcomputer revolution
Date: 03 Aug, 2024
Blog: Facebook

50 years ago, CP/M started the microcomputer revolution
https://www.theregister.com/AMP/2024/08/02/cpm_50th_anniversary/

some of the MIT CTSS/7094
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
went to the 5th flr for Multics
https://en.wikipedia.org/wiki/Multics

Others went to the IBM science center on the 4th flr and did virtual
machines.
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center

They originally wanted a 360/50 to do hardware mods on to add virtual
memory, but all the extra 360/50s were going to the FAA ATC program,
and so they had to settle for a 360/40 ... doing CP40/CMS (control
program/40, cambridge monitor system)
https://en.wikipedia.org/wiki/IBM_CP-40

CMS would run on the real 360/40 (pending the CP40 virtual machine
being operational). CMS started with a single letter for each
filesystem ("P", "S", etc), which was mapped to a "symbolic name" that
started out mapped to a physical (360/40) 2311 disk, and later to
minidisks.

Then when the 360/67, standard with virtual memory, became available,
CP40/CMS morphs into CP67/CMS (and later VM370/CMS, virtual machine 370
and conversational monitor system). CMS Program Logic Manual (CP67/CMS
Version 3.1)
https://bitsavers.org/pdf/ibm/360/cp67/GY20-0591-1_CMS_PLM_Oct1971.pdf

(going back to CMS implementation for real 360/40) the system API
convention: pg4:

Symbolic Name: CON1, DSK1, DSK2, DSK3, DSK4, DSK5, DSK6, PRN1, RDR1,
PCH1, TAP1, TAP2

... snip ...
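
A toy illustration of that two-level mapping (filesystem letter to
symbolic device name to physical disk or minidisk); the particular
letters, symbolic names and device addresses used below are assumptions
for the example, not taken from the PLM:

# two-level mapping sketch: CMS filesystem letter -> symbolic name -> device
letter_to_symbolic = {"P": "DSK1", "S": "DSK2"}    # assumed letters
symbolic_to_device = {
    "DSK1": "2311 at 191",   # started out mapped to real (360/40) 2311s ...
    "DSK2": "2311 at 19E",
}

def resolve(letter):
    sym = letter_to_symbolic[letter]
    return sym, symbolic_to_device[sym]

print(resolve("P"))          # ('DSK1', '2311 at 191')

# ... later only the second level changes: symbolic names point at minidisks
symbolic_to_device["DSK1"] = "minidisk at 191 on a shared volume"
print(resolve("P"))          # same filesystem letter, now backed by a minidisk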

trivia: I took a two credit-hr intro to fortran/computers class and at
the end of the semester was hired to rewrite 1401 MPIO for the 360/30.
The univ was getting a 360/67 for tss/360 to replace a 709/1401 and
temporarily got a 360/30 (that had 1401 microcode emulation) to replace
the 1401, pending arrival of the 360/67 (the univ shutdown the
datacenter on weekends, and I would have it dedicated, although 48hrs
w/o sleep made Monday classes hard). I was given a bunch of hardware &
software manuals and got to design my own monitor, device drivers,
interrupt handlers, error recovery, storage management, etc ... and
within a few weeks had a 2000-card assembler program.

Within a year of taking the intro class, the 360/67 showed up and I was
hired fulltime responsible for OS/360 (tss/360 never came to
production, so it ran as a 360/65 with os/360) ... and I continued to
have my dedicated weekend time. Student fortran ran under a second on
the 709 (tape to tape), but initially over a minute on the 360/65. I
install HASP and it cuts the time in half. I then start revamping
stage2 sysgen to place datasets and PDS members to optimize disk seeks
and multi-track searches, cutting another 2/3rds to 12.9secs; never got
better than the 709 until I install univ of waterloo WATFOR. My 1st
SYSGEN was R9.5 MFT; I then started redoing stage2 sysgen for R11 MFT.
MVT shows up with R12, but I didn't do an MVT gen until R15/16 (the
15/16 disk format allowed specifying the VTOC cyl ... aka placing it
other than at cyl0 to reduce avg. arm seek).

along the way, CSC comes out to install CP67/CMS (3rd installation
after CSC itself and MIT Lincoln Labs; precursor to VM370), which I
mostly got to play with during my weekend dedicated time. The first few
months I mostly spent rewriting CP67 pathlengths for running os/360 in
a virtual machine; the test os/360 stream ran 322secs stand-alone but
initially 856secs virtually (534secs of CP67 CPU overhead); I got the
CP67 CPU down to 113secs. CP67 had 1052 & 2741 support with automatic
terminal type identification (controller SAD CCW to switch the port
scanner terminal type). The univ had some TTY terminals, so I added TTY
support integrated with the automatic terminal type identification.

before msdos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was CP/M
https://en.wikipedia.org/wiki/CP/M
before developing CP/M, Kildall worked on IBM CP67/CMS at npg
https://en.wikipedia.org/wiki/Naval_Postgraduate_School

aka: CP/M ... control program/microcomputer

Opel's obit ...
https://www.pcworld.com/article/243311/former_ibm_ceo_john_opel_dies.html

According to the New York Times, it was Opel who met with Bill Gates,
CEO of the then-small software firm Microsoft, to discuss the
possibility of using Microsoft PC-DOS OS for IBM's
about-to-be-released PC. Opel set up the meeting at the request of
Gates' mother, Mary Maxwell Gates. The two had both served on the
National United Way's executive committee.

... snip ...

other trivia: Boca claimed that they weren't doing any software for
ACORN (code name for the IBM/PC) and so a small IBM group in Silicon
Valley formed to do ACORN software (many of whom had been involved with
CP/67-CMS and/or its follow-on VM/370-CMS) ... and every few weeks,
there was contact with Boca confirming the decision hadn't changed.
Then at some point, Boca changed its mind and the silicon valley group
was told that if they wanted to do ACORN software, they would have to
move to Boca (only one person accepted the offer; he didn't last long
and returned to silicon valley). Then there was a joke that Boca didn't
want any internal company competition and found it better to deal with
an external organization via contract than deal with internal IBM
politics.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

some recent posts mentioning early work on CP67 pathlengths for
running os/360
https://www.garlic.com/~lynn/2024e.html#3 TYMSHARE Dialup
https://www.garlic.com/~lynn/2024d.html#111 GNOME bans Manjaro Core Team Member for uttering "Lunduke"
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#99 Interdata Clone IBM Telecommunication Controller
https://www.garlic.com/~lynn/2024d.html#90 Computer Virtual Memory
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024d.html#36 This New Internet Thing, Chapter 8
https://www.garlic.com/~lynn/2024c.html#53 IBM 3705 & 3725
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024b.html#114 EBCDIC
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#95 Ferranti Atlas and Virtual Memory
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024b.html#17 IBM 5100
https://www.garlic.com/~lynn/2024.html#94 MVS SRM
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#73 UNIX, MULTICS, CTSS, CSC, CP67
https://www.garlic.com/~lynn/2024.html#28 IBM Disks and Drums
https://www.garlic.com/~lynn/2024.html#17 IBM Embraces Virtual Memory -- Finally
https://www.garlic.com/~lynn/2024.html#12 THE RISE OF UNIX. THE SEEDS OF ITS FALL

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Downfall and Make-over

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Downfall and Make-over
Date: 03 Aug, 2024
Blog: Facebook

re: SNA/TCPIP; in the 80s, the communication group was fighting off
client/server and distributed computing, trying to preserve their dumb
terminal paradigm ... in the late 80s, a senior disk engineer got a
talk scheduled at an annual, world-wide, internal communication group
conference, supposedly on 3174 performance ... but opened the talk with
the statement that the communication group was going to be responsible
for the demise of the disk division. The disk division was seeing a
drop in disk sales with data fleeing datacenters to more
distributed-computing friendly platforms. They had come up with a
number of solutions, but were constantly vetoed by the communication
group (with their corporate strategic ownership of everything that
crossed datacenter walls) ... the communication group's datacenter
stranglehold wasn't just disks, and a couple years later IBM has one of
the largest losses in the history of US companies.

As a partial work-around, a senior disk division executive was
investing in distributed computing startups that would use IBM disks
... and would periodically ask us to drop by his investments to see if
we could provide any help.

Learson was CEO and tried (and failed) to block the bureaucrats,
careerists and MBAs from destroying Watson culture and legacy ... then
20yrs later IBM (w/one of the largest losses in the history of US
companies) was being reorged into the 13 "baby blues" in preparation
for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

we had already left IBM, but got a call from the bowels of Armonk
asking if we could help with the breakup of the company. Before we get
started, the board brings in the former president of AMEX, who
(somewhat) reverses the breakup (although it wasn't long before the
disk division is gone).

some more Learson detail
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
communication group trying to preserve dumb terminal paradigm posts
https://www.garlic.com/~lynn/subnetwork.html#terminal

--
virtualization experience starting Jan1968, online at home since Mar1970

50 years ago, CP/M started the microcomputer revolution

From: Lynn Wheeler <lynn@garlic.com>
Subject: 50 years ago, CP/M started the microcomputer revolution
Date: 04 Aug, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024e.html#14 50 years ago, CP/M started the microcomputer revolution

some personal computing history
https://arstechnica.com/features/2005/12/total-share/
https://arstechnica.com/features/2005/12/total-share/2/
https://arstechnica.com/features/2005/12/total-share/3/
https://arstechnica.com/features/2005/12/total-share/4/
https://arstechnica.com/features/2005/12/total-share/5
https://arstechnica.com/features/2005/12/total-share/6/
https://arstechnica.com/features/2005/12/total-share/7/
https://arstechnica.com/features/2005/12/total-share/8/
https://arstechnica.com/features/2005/12/total-share/9/
https://arstechnica.com/features/2005/12/total-share/10/

old archived post with decade of vax sales (including microvax),
sliced and diced by year, model, us/non-us
https://www.garlic.com/~lynn/2002f.html#0

IBM 4300s sold into the same mid-range market as VAX and in about the
same numbers (excluding microvax) for small/single-unit orders; the big
difference was large corporations with orders for hundreds of vm/4300s
for placing out in departmental areas ... sort of the leading edge of
the coming distributed computing tsunami.

other trivia: In jan1979, I was con'ed into doing an (old CDC6600
fortran) benchmark on an early engineering 4341 for a national lab that
was looking at getting 70 for a compute farm, sort of the leading edge
of the coming cluster supercomputing tsunami. A small vm/4341 cluster
was much less expensive than a 3033, with higher throughput, smaller
footprint, and less power&cooling; folklore is that POK felt so
threatened that they got corporate to cut Endicott's allocation of a
critical 4341 manufacturing component in half.

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Downfall and Make-over

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Downfall and Make-over
Date: 04 Aug, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024e.html#15 IBM Downfall and Make-over

other background: AMEX and KKR were in competition for a (private
equity) take-over of RJR, and KKR wins. KKR then runs into trouble with
RJR and hires away the AMEX president to help.
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco
Then IBM board hires former AMEX president to help with IBM make-over
... who uses some of the same tactics used at RJR (ref gone 404, but
lives on at wayback machine).
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml

The former AMEX president then leaves IBM to head up another major
private-equity company
http://www.motherjones.com/politics/2007/10/barbarians-capitol-private-equity-public-enemy/

"Lou Gerstner, former ceo of ibm, now heads the Carlyle Group, a
Washington-based global private equity firm whose 2006 revenues of $87
billion were just a few billion below ibm's. Carlyle has boasted
George H.W. Bush, George W. Bush, and former Secretary of State James
Baker III on its employee roster."

... snip ...

... around the turn of the century, private-equity firms were buying up
beltway bandits and gov. contractors, hiring prominent politicians to
lobby congress to outsource gov. work to their companies, side-stepping
laws blocking companies from using money from gov. contracts to lobby
congress. The bought companies also were having their funding cut to
the bone, maximizing revenue for the private equity owners; one poster
child was a company doing outsourced high-security clearances that was
found to be just doing the paperwork, not actually doing the background
checks.

former amex president posts
https://www.garlic.com/~lynn/submisc.html#gerstner
private-equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
pension plan posts
https://www.garlic.com/~lynn/submisc.html#pension
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
success of failure posts
https://www.garlic.com/~lynn/submisc.html#success.of.failure

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Downfall and Make-over

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Downfall and Make-over
Date: 04 Aug, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024e.html#15 IBM Downfall and Make-over
https://www.garlic.com/~lynn/2024e.html#17 IBM Downfall and Make-over

23jun1969 unbundling announcement, starting to charge for
(application) software (made the case that kernel software should
still be free), SE services, hardware maint.

SE training had included trainee type operation, part of a large group
at the customer site ... however, they couldn't figure out how *NOT* to
charge for trainee SEs at customer location ... thus was born HONE,
branch-office online access to CP67 datacenters, practicing with guest
operating systems in virtual machines. The science center had also
ported APL\360 to CMS for CMS\APL (fixes for large demand page virtual
memory workspaces and supporting system APIs for things like file I/O,
enabling real-world applications). HONE then started offering
CMS\APL-based sales&marketing support applications, which came to
dominate all HONE activity (with guest operating system use dwindling
away). One of my hobbies after joining IBM was enhanced operating
systems for internal datacenters and HONE was long-time customer.

Early 70s, IBM had the Future System effort, totally different from
360/370 and was going to completely replace 370; dearth of new 370
during FS is credited with giving the clone 370 makers (including
Amdahl) their market foothold (all during FS, I continued to work on
360/370 even periodically ridiculing what FS was doing, even drawing
analogy with long running cult film down at Central Sq; wasn't exactly
career enhancing activity). When FS implodes, there is mad rush to get
stuff back into the 370 product pipelines, including kicking off
quick&dirty 3033&3081 efforts in parallel.
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

FS (failing) significantly accelerated the rise of the bureaucrats,
careerists, and MBAs .... From Ferguson & Morris, "Computer Wars: The
Post-IBM World", Time Books
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394

... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrong headedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive

... snip ...

repeat, CEO Learson had tried (and failed) to block bureaucrats,
careerists, MBAs from destroying Watson culture & legacy.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
20yrs later, IBM has one of the largest losses in the history of US
companies

In the wake of the FS implosion, the decision was changed to start
charging for kernel software and some of my stuff (for internal
datacenters) was selected to be initial guinea pig and I had to spend
some amount of time with lawyers and business people on kernel
software charging practices.

Application software practice was to forecast customer market at high,
medium and low price (the forecasted customer revenue had to cover
original development along with ongoing support, maintenance and new
development). It was a great culture shock for much of IBM software
development ... one solution was combining software packages, bundling
enormously bloated projects with extremely efficient projects (in the
"combined" forecast, the efficient projects underwriting the extremely
bloated efforts).

Trivia: after FS implodes, the head of POK also managed to convince
corporate to kill the VM370 product, shutdown the development group
and transfer all the people to POK for MVS/XA (Endicott eventually
manages to save the VM370 product mission, but had to recreate a
development group from scratch). I was also con'ed into helping with a
16-processor tightly-coupled, multiprocessor effort and we got the
3033 processor engineers into working on it in their spare time (a lot
more interesting than remapping 168 logic to 20% faster
chips). Everybody thought it was great until somebody told the head of
POK it could be decades before POK's favorite son operating system (MVS)
had effective 16-way support (i.e. IBM documentation at the time was
MVS 2-processor only had 1.2-1.5 times the throughput of single
processor). Head of POK then invited some of us to never visit POK
again and told the 3033 processor engineers, heads down and no
distractions (note: POK doesn't ship 16 processor system until after
the turn of the century).

Other trivia: Amdahl had won the battle to make ACS 360-compatible
... folklore then was IBM executives killed ACS/360 because it would
advance the state of the art too fast and IBM would lose control of
the market. Amdahl then leaves IBM. Following has some ACS/360
features that don't show up until ES/9000 in the 90s:
https://people.computing.clemson.edu/~mark/acs_end.html

... note there were comments that if any other computer company had
dumped so much money into such an enormous failed (Future System)
project, they would have never survived (it took IBM another 20yrs
before it was about to become extinct)

one of the last nails in the Future System coffin was analysis by the
IBM Houston Science Center that if apps were moved from a 370/195 to an
FS machine made out of the fastest available hardware technology, they
would have the throughput of a 370/145 ... about a 30 times slowdown.

23jun1969 IBM unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
smp, tightly-coupled, multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

HONE, APL, IBM 5100

From: Lynn Wheeler <lynn@garlic.com>
Subject: HONE, APL, IBM 5100
Date: 06 Aug, 2024
Blog: Facebook

23jun1969 unbundling announcement starts charging for (application)
software, SE services, maint. etc. SE training used to include part of
large group at customer site, but couldn't figure out how not to
charge for trainee SE time ... so was born "HONE", online branch
office access to CP67 datacenters, practicing with guest operating
systems in virtual machines. IBM Cambridge Science Center also did
port of APL\360 to CP67/CMS for CMS\APL (lots of fixes for workspaces
in large demand page virtual memory and APIs for system services like
file I/O, enabling lots of real world apps) and HONE started offering
CMS\APL-based sales&marketing support applications ... which comes to
dominate all HONE use.

HONE transitions from CP67/CMS to VM370/CMS (and VM370 APL\CMS done at
Palo Alto Science Center) and clone HONE installations start popping
up all over the world (HONE by far largest use of APL). PASC also does
the 370/145 APL microcode assist (claims it ran APL as fast as on a
370/168) and prototypes for what becomes 5100
https://en.wikipedia.org/wiki/IBM_5110
https://en.wikipedia.org/wiki/IBM_PALM_processor

The US HONE datacenters are also consolidated in Palo Alto (across the
back parking lot from PASC, trivia when FACEBOOK 1st moves into
Silicon Valley, it is new bldg built next door to the former US
consolidated HONE datacenter). US HONE systems are enhanced with
single-system image, loosely-coupled, shared DASD with load balancing
and fall-over support (at least as large as any airline ACP/TPF
installation) and then add 2nd processor to each system (16 processors
aggregate) ... ACP/TPF not getting two-processor support for another
decade. PASC helps (HONE) with lots of APL\CMS tweaks.

trivia: When I 1st joined IBM, one of my hobbies was enhanced
production operating systems for internal datacenters, and HONE was
long-time customer. One of my 1st IBM overseas trips was for HONE EMEA
install in Paris (La Defense, "Tour Franklin?" brand new bldg, still
brown dirt, not yet landscaped)

23jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
(internal) CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE (& APL) posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, multiprocessor support posts
https://www.garlic.com/~lynn/subtopic.html#smp

posts mentioning APL, PASC, PALM, 5100/5110:
https://www.garlic.com/~lynn/2024b.html#15 IBM 5100
https://www.garlic.com/~lynn/2023e.html#53 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2015c.html#44 John Titor was right? IBM 5100
https://www.garlic.com/~lynn/2013o.html#82 One day, a computer will fit on a desk (1974) - YouTube
https://www.garlic.com/~lynn/2005.html#44 John Titor was right? IBM 5100

--
virtualization experience starting Jan1968, online at home since Mar1970

TYMSHARE, ADVENTURE/games

From: Lynn Wheeler <lynn@garlic.com>
Subject: TYMSHARE, ADVENTURE/games
Date: 06 Aug, 2024
Blog: Facebook

On one of the visits to TYMSHARE they demo'ed a game somebody had found
on a Stanford PDP10 and ported it to VM370/CMS. I got a copy and made
the executable available inside IBM ... and would send the source to
anybody that got all points .... within a short period of time, new
versions with more points appeared, as well as a port to PLI.

We had an argument with corporate auditors who directed that all games
had to be removed from the system. At the time most company 3270 logon
screens included "For Business Purposes Only" ... our 3270 logon
screens said "For Management Approved Uses" (and we claimed the games
were human factors demo programs).

commercial virtual machine online service posts
https://www.garlic.com/~lynn/submain.html#online

some recent posts mentioning tymshare and adventure/games
https://www.garlic.com/~lynn/2024c.html#120 Disconnect Between Coursework And Real-World Computers
https://www.garlic.com/~lynn/2024c.html#43 TYMSHARE, VMSHARE, ADVENTURE
https://www.garlic.com/~lynn/2024c.html#25 Tymshare & Ann Hardy
https://www.garlic.com/~lynn/2023f.html#116 Computer Games
https://www.garlic.com/~lynn/2023f.html#60 The Many Ways To Play Colossal Cave Adventure After Nearly Half A Century
https://www.garlic.com/~lynn/2023f.html#7 Video terminals
https://www.garlic.com/~lynn/2023e.html#9 Tymshare
https://www.garlic.com/~lynn/2023d.html#115 ADVENTURE
https://www.garlic.com/~lynn/2023c.html#14 Adventure
https://www.garlic.com/~lynn/2023b.html#86 Online systems fostering online communication
https://www.garlic.com/~lynn/2023.html#37 Adventure Game
https://www.garlic.com/~lynn/2022e.html#1 IBM Games
https://www.garlic.com/~lynn/2022c.html#28 IBM Cambridge Science Center
https://www.garlic.com/~lynn/2022b.html#107 15 Examples of How Different Life Was Before The Internet
https://www.garlic.com/~lynn/2022b.html#28 Early Online
https://www.garlic.com/~lynn/2022.html#123 SHARE LSRAD Report
https://www.garlic.com/~lynn/2022.html#57 Computer Security
https://www.garlic.com/~lynn/2021k.html#102 IBM CSO
https://www.garlic.com/~lynn/2021h.html#68 TYMSHARE, VMSHARE, and Adventure
https://www.garlic.com/~lynn/2021e.html#8 Online Computer Conferencing
https://www.garlic.com/~lynn/2021b.html#84 1977: Zork
https://www.garlic.com/~lynn/2021.html#85 IBM Auditors and Games

--
virtualization experience starting Jan1968, online at home since Mar1970

360/50 and CP-40

From: Lynn Wheeler <lynn@garlic.com>
Subject: 360/50 and CP-40
Date: 06 Aug, 2024
Blog: Facebook

IBM Cambridge Science Center had a similar problem, wanted to have a
360/50 to modify for virtual memory, but all the extra 360/50s were
going to FAA ATC ... and so they had to settle for 360/40 ... they
implemented virtual memory with associative array that held process-ID
and virtual page number for each real page (compared to Atlas
associative array, which just had virtual page number for each real
page...  effectively just single large virtual address space).
https://en.wikipedia.org/wiki/IBM_CP-40
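
Not CP-40 code, just a minimal Python sketch (with made-up structures)
of the difference described above: the CP-40 box tagged each real page
with both a process-ID and a virtual page number, while Atlas tagged
each real page with only a virtual page number (effectively one large
virtual address space).

  # hypothetical illustration, one entry per *real* page frame
  def cp40_lookup(frames, pid, vpage):
      # CP-40 style: match needs both process-ID and virtual page,
      # so separate address spaces per process are possible
      for real_page, entry in enumerate(frames):
          if entry == (pid, vpage):
              return real_page
      return None                      # miss -> page fault

  def atlas_lookup(frames, vpage):
      # Atlas style: only the virtual page number is held,
      # effectively a single large virtual address space
      for real_page, entry in enumerate(frames):
          if entry == vpage:
              return real_page
      return None                      # miss -> page fault

  # toy example with four real page frames
  frames = [(1, 0), (1, 3), (2, 0), (2, 7)]    # (process-ID, virtual page)
  assert cp40_lookup(frames, 2, 0) == 2        # process 2's page 0
  assert cp40_lookup(frames, 3, 0) is None     # process 3 faults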

the official IBM operating system for (standard virtual memory) 360/67
was TSS/360 which peaked around 1200 people at a time when the science
center had 12 people (that included secretary) morphing CP/40 into
CP/67.

Melinda's history website
http://www.leeandmelindavarian.com/Melinda#VMHist

trivia: FE had a bootstrap diagnostic process that started with
"scoping" components. With 3081 TCMs ... it was no longer possible to
scope ... so a system was written for the UC processor (which the
communication group used for 37xx and other boxes) that implemented a
"service processor" with probes into the TCMs for diagnostic purposes
(a scope could be used to diagnose the "service processor", bring it
up, and then it could be used to diagnose the 3081).

Moving to 3090, they decided on using 4331 running a highly modified
version of VM370 Release 6 with all the screens implemented in CMS
IOS3270. This was then upgraded to a pair of 4361s for the service
processor. You can sort of see this in the 3092 requiring a pair of
3370 FBA drives (one for each 4361) ... even for MVS systems that never
had FBA support
https://web.archive.org/web/20230719145910/https://www.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3090.html

trivia: Early in rex (before it was renamed rexx and released to
customers), I wanted to show it wasn't just another pretty scripting
language. I decided to spend half time over three months rewriting a
large assembler (dump reader and diagnostic) application with ten
times the function and running ten times faster (some sleight of hand
to make interpreted rex run faster than asm). I finished early, so I
wrote a library of automated routines that searched for common failure
signatures. For some reason it was never released to customers (even
though it was in use by nearly every internal datacenter and PSR)
... I did eventually get permission to give user group presentations
on how I did the implementation ... and eventually similar
implementations started to appear. Then the 3092 group asked if they
could ship it with the 3090 service processor; some old archive email
https://www.garlic.com/~lynn/2010e.html#email861031
https://www.garlic.com/~lynn/2010e.html#email861223

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
dumprx posts
https://www.garlic.com/~lynn/submain.html#dumprx

recent posts mentioning CP40
https://www.garlic.com/~lynn/2024e.html#14 50 years ago, CP/M started the microcomputer revolution
https://www.garlic.com/~lynn/2024d.html#111 GNOME bans Manjaro Core Team Member for uttering "Lunduke"
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#102 Chipsandcheese article on the CDC6600
https://www.garlic.com/~lynn/2024d.html#25 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024d.html#0 time-sharing history, Privilege Levels Below User
https://www.garlic.com/~lynn/2024c.html#88 Virtual Machines
https://www.garlic.com/~lynn/2024c.html#65 More CPS
https://www.garlic.com/~lynn/2024c.html#18 CP40/CMS
https://www.garlic.com/~lynn/2024b.html#5 Vintage REXX
https://www.garlic.com/~lynn/2024.html#28 IBM Disks and Drums
https://www.garlic.com/~lynn/2023g.html#82 Cloud and Megadatacenter
https://www.garlic.com/~lynn/2023g.html#35 Vintage TSS/360
https://www.garlic.com/~lynn/2023g.html#1 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#108 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023d.html#87 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#73 Some Virtual Machine History
https://www.garlic.com/~lynn/2023c.html#105 IBM 360/40 and CP/40
https://www.garlic.com/~lynn/2022g.html#2 VM/370
https://www.garlic.com/~lynn/2022f.html#113 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022d.html#58 IBM 360/50 Simulation From Its Microcode
https://www.garlic.com/~lynn/2022.html#71 165/168/3033 & 370 virtual memory

--
virtualization experience starting Jan1968, online at home since Mar1970

Disk Capacity and Channel Performance

From: Lynn Wheeler <lynn@garlic.com>
Subject: Disk Capacity and Channel Performance
Date: 07 Aug, 2024
Blog: Facebook

Original 3380 had 20 track spacings between each data track. That was
then cut in half giving twice the tracks (& cylinders) for double the
capacity, then spacing cut again for triple the capacity.

About then the father of 801/risc gets me to try and help him with a
disk "wide-head" ... transferring data in parallel from 16 closely
spaced data tracks ... following servo tracks on each side (18 tracks
total). One of the problems was mainframe channels were still
3mbytes/sec and this required 50mbytes/sec. Then in 1988, the branch
office asks me to help LLNL (national lab) get some serial stuff they
were working with standardized, which quickly becomes the fibre-channel
standard ("FCS", including some stuff I did in 1980 ... initially
1gbit/sec full-duplex, 200mbytes/sec aggregate). Later POK announces
some serial stuff that they had been working on since at least 1980,
with ES/9000 as ESCON (when it was already obsolete, around
17mbytes/sec).

Then some POK engineers become involved with FCS and define a protocol
that radically reduces the throughput, eventually announced as
FICON. Latest public benchmark I've found is z196 "Peak I/O" that gets
2M IOPS over 104 FICON. About the same time an FCS was announced for
E5-2600 server blades claiming over a million IOPS (two such FCS having
higher throughput than 104 FICON). Note IBM documentation also
recommended that SAPs (system assist processors that handle the actual
I/O) be kept to 70% CPU ... or around 1.5M IOPS. Somewhat complicating
matters is CKD DASD haven't been made for decades, all being simulated
using industry standard fixed-block disks.
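
Rough per-channel arithmetic from the figures above (back-of-envelope
only, not an official benchmark comparison):

  z196_peak_iops = 2_000_000       # z196 "Peak I/O" benchmark
  ficon_channels = 104
  print(f"per FICON: ~{z196_peak_iops / ficon_channels:,.0f} IOPS")
  # ~19,000 IOPS per FICON channel

  fcs_iops = 1_000_000             # claimed for FCS on E5-2600 blades
  print(f"per native FCS: ~{fcs_iops:,.0f} IOPS")
  # i.e. roughly 50 times the per-channel throughput of a FICON

  # IBM guidance: keep SAPs (which run the actual I/O) at 70% CPU
  print(f"SAP-capped: ~{z196_peak_iops * 0.70:,.0f} IOPS")
  # ~1.4M, i.e. roughly the 1.5M figure mentioned above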

re:
https://www.ibm.com/support/pages/system/files/inline-files/IBM%20z16%20FEx32S%20Performance_3.pdf

aka zHPF & TCW is closer to native FCS operation starting in 1988 (and
what I had done in 1980) ... trivia: the hardware vendor tried to get
IBM to release my support in 1980 ... but the group in POK playing
with fiber were afraid it would make it harder to get their stuff
released (eventually a decade later as ESCON) ... and got it
vetoed. Old archived (bit.listserv.ibm-main) post from 2012 discussing
how zHPF&TCW is closer to the original 1988 FCS specification (and what
I had done in 1980), also mentioning throughput being throttled by SAP
processing:
https://www.garlic.com/~lynn/2012m.html#4

While working with FCS, I was also doing the IBM HA/CMP product ... Nick
Donofrio had approved the HA/6000 project, originally for NYTimes to
move their newspaper system (ATEX) off VAXCluster to RS/6000. I rename it
HA/CMP when I start doing technical/scientific cluster scale-up with
national labs and commercial cluster scale-up with RDBMS vendors
(Oracle, Sybase, Informix, Ingres) that had VAXcluster support in same
source base with UNIX (I do a distributed lock manager with
VAXCluster API semantics to ease the transition).

We were also using Hursley's 9333 in some configurations and I was
hoping to make it interoperable with FCS. Early jan1992, in meeting
with Oracle CEO, AWD/Hester tells Ellison that we would have
16-processor clusters by mid92 and 128-processor clusters by
ye92. Then late jan1992, cluster scale-up was transferred for IBM
Supercomputer (technical/scientific *ONLY*) and we were told we
couldn't work with anything having more than four processors. We leave
IBM a few months later. Then we later find that 9333 evolves into SSA
instead:
https://en.wikipedia.org/wiki/Serial_Storage_Architecture

channel extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FCS & FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

posts specifically mentioning 9333, SSA, FCS, FICON
https://www.garlic.com/~lynn/2024c.html#60 IBM "Winchester" Disk
https://www.garlic.com/~lynn/2024.html#71 IBM AIX
https://www.garlic.com/~lynn/2024.html#35 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#70 Vintage RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#58 Vintage IBM 5100
https://www.garlic.com/~lynn/2023e.html#78 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#97 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2022e.html#47 Best dumb terminal for serial connections
https://www.garlic.com/~lynn/2022b.html#15 Channel I/O
https://www.garlic.com/~lynn/2021k.html#127 SSA
https://www.garlic.com/~lynn/2021g.html#1 IBM ESCON Experience
https://www.garlic.com/~lynn/2019b.html#60 S/360
https://www.garlic.com/~lynn/2019b.html#57 HA/CMP, HA/6000, Harrier/9333, STK Iceberg & Adstar Seastar
https://www.garlic.com/~lynn/2016h.html#95 Retrieving data from old hard drives?
https://www.garlic.com/~lynn/2013m.html#99 SHARE Blog: News Flash: The Mainframe (Still) Isn't Dead
https://www.garlic.com/~lynn/2013m.html#96 SHARE Blog: News Flash: The Mainframe (Still) Isn't Dead
https://www.garlic.com/~lynn/2013i.html#50 The Subroutine Call
https://www.garlic.com/~lynn/2013g.html#14 Tech Time Warp of the Week: The 50-Pound Portable PC, 1977
https://www.garlic.com/~lynn/2012m.html#2 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012k.html#77 ESCON
https://www.garlic.com/~lynn/2012k.html#69 ESCON
https://www.garlic.com/~lynn/2012j.html#13 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2011p.html#40 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2011p.html#39 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2011e.html#31 "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2010i.html#61 IBM to announce new MF's this year
https://www.garlic.com/~lynn/2010h.html#63 25 reasons why hardware is still hot at IBM
https://www.garlic.com/~lynn/2010f.html#13 What was the historical price of a P/390?
https://www.garlic.com/~lynn/2010f.html#7 What was the historical price of a P/390?
https://www.garlic.com/~lynn/2009q.html#32 Mainframe running 1,500 Linux servers?
https://www.garlic.com/~lynn/2006q.html#24 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006p.html#46 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2005m.html#55 54 Processors?
https://www.garlic.com/~lynn/2005m.html#46 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005.html#50 something like a CTC on a PC
https://www.garlic.com/~lynn/2004d.html#68 bits, bytes, half-duplex, dual-simplex, etc
https://www.garlic.com/~lynn/2003o.html#54 An entirely new proprietary hardware strategy
https://www.garlic.com/~lynn/2003h.html#0 Escon vs Ficon Cost
https://www.garlic.com/~lynn/2002j.html#15 Unisys A11 worth keeping?
https://www.garlic.com/~lynn/2002h.html#78 Q: Is there any interest for vintage Byte Magazines from 1983
https://www.garlic.com/~lynn/95.html#13 SSA

--
virtualization experience starting Jan1968, online at home since Mar1970

After private equity takes over hospitals, they are less able to care for patients

From: Lynn Wheeler <lynn@garlic.com>
Subject: After private equity takes over hospitals, they are less able to care for patients
Date: 08 Aug, 2024
Blog: Facebook

After private equity takes over hospitals, they are less able to care
for patients, top medical researchers say. A study by physicians in
the Journal of the American Medical Association describes a pattern of
selling land, equipment and other resources after private equity
acquires hospitals.
https://www.nbcnews.com/investigations/private-equity-takes-over-hospitals-less-able-care-patients-jama-rcna164497

private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

KKR Founders Sued for Allegedly Getting Giant Payday for No
Work. Lawsuit adds to legal scrutiny of arcane tax deals benefiting
private-equity heavyweights
https://archive.ph/kqQvI

tax fraud, tax evasion, tax loopholes, tax abuse, tax avoidance, tax
haven posts
https://www.garlic.com/~lynn/submisc.html#tax.evasion

--
virtualization experience starting Jan1968, online at home since Mar1970

Public Facebook Mainframe Group

From: Lynn Wheeler <lynn@garlic.com>
Subject: Public Facebook Mainframe Group
Date: 08 Aug, 2024
Blog: Facebook

New to the group

I had taken 2credit hr intro to fortran/computers and at end of
semester was hired to rewrite 1401 MPIO for 360/30 (univ. was getting
360/67 for tss/360 replacing 709/1401, temporarily got 360/30
replacing 1401 pending 360/67). Within a year of taking the intro
class the 360/67 arrives
and I was hired fulltime responsible for os/360 (tss/360 not
production so ran as 360/65). Before I graduate, I'm hired fulltime
into small group in Boeing CFO office to help with the formation of
Boeing Computer Services (consolidate all dataprocessing into an
independent business unit). I thought the Renton datacenter was the
largest in the world, a couple hundred million in 360s, 360/65s
arriving faster than
they could be installed, boxes constantly staged in hallways around
machine room. Lots of politics between Renton director and CFO, who
only had a 360/30 up at Boeing field for payroll (although they
enlarge the machine room and install 360/67 for me to play with when
I'm not doing other stuff). When I graduate, I join IBM science center
(instead of staying with Boeing CFO). One of my hobbies after joining
IBM was enhanced production operating systems for internal
datacenters, including the online branch office sales&marketing
support HONE systems, which were long-time customers.

A little over a decade ago, an IBM customer asks me to track down the
IBM decision to add virtual memory to all 370s and I find a staff
member to the executive making the decision. Basically MVT storage
management was so bad that region sizes had to be specified four times
larger than used. As a result a typical 1mbyte 370/165 only ran four
concurrent regions, insufficient to keep the system busy and
justified. Going to 16mbyte virtual memory (aka VS2/SVS) allowed the
number of regions to be increased by a factor of four (capped at
15 by the 4bit storage protect key) with little or no paging ... sort
of like running MVT in a CP67 16mbyte virtual machine. Ludlow was doing
the initial MVT->SVS on a 360/67 ... a simple build of 16mbyte tables
and simple paging code (little need for optimization with little or no
paging). The biggest effort was that EXCP/SVC0 needed to make a copy of
channel programs, substituting real addresses for virtual ... and he
borrows the CP67 CCWTRANS code for the implementation.
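
Not the actual CCWTRANS code, just a minimal sketch (in Python, with
made-up structures) of the idea: EXCP/SVC0 copies the caller's channel
program, substituting real addresses for virtual, since the channel
hardware only deals in real addresses. A real implementation also has
to pin the pages for the duration of the I/O and handle buffers that
cross page boundaries.

  PAGE = 4096   # 370 page size

  def translate(vaddr, page_table):
      # virtual byte address -> real byte address (fault/page-in if absent)
      vpage, offset = divmod(vaddr, PAGE)
      return page_table[vpage] * PAGE + offset

  def copy_channel_program(ccws, page_table):
      # shadow the caller's CCWs, replacing virtual data addresses with
      # real addresses so the channel can execute them
      return [(op, translate(vaddr, page_table), flags, count)
              for (op, vaddr, flags, count) in ccws]

  # toy example: one READ CCW with a virtual buffer address
  page_table = {5: 17}                          # virtual page 5 -> real page 17
  prog = [(0x02, 5 * PAGE + 256, 0x00, 800)]    # read 800 bytes
  print(copy_channel_program(prog, page_table))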

I had done dynamic adaptive resource manager ("wheeler" scheduler) for
CP67 at the univ, as undergraduate in the 60s. I started pontificating
that 360s had made trade-off between abundant I/O resources and
limited real storage and processor ... but by the mid-70s the
trade-off had started to invert. In the early 80s, I wrote that
between 360 announce and then, the relative system throughput of DASD
had declined by order of magnitude (disk got 3-5 times faster and
systems got 40-50 times faster). Some disk division executive took
exception and directed the division performance group to refute my
claims. After a few weeks they came back and essentially said that I
had slightly understated the problem. They then respun the analysis
for (user group) SHARE presentation configuring DASD for improved
throughput (16Aug1984, SHARE 63, B874). More recently there has been
observation that current memory access latency when measured in count
of processor cycles is similar to 60s DASD access latency when
measured in count of 60s processor cycles (memory is the new DASD).
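
The arithmetic behind the order-of-magnitude claim, using the middle of
the ranges above:

  disk_speedup   = 4     # disks got 3-5 times faster; use ~4x
  system_speedup = 45    # systems got 40-50 times faster; use ~45x
  print(f"relative DASD throughput: ~{disk_speedup / system_speedup:.2f}")
  # ~0.09 of the 360-era ratio, i.e. roughly a 10x relative decline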

In the 70s, as systems got bigger and mismatch between DASD throughput
and system throughput increased, increased concurrent process
execution was required and it was necessary to transition to separate
virtual address space for each "region" ... aka "MVS" (to get around
the 4bit storage key capping the number at 15). This created a different
problem, OS360 was heavily pointer passing oriented ... so they mapped
an 8mbyte "image" of the MVS kernel into every (16mbyte) virtual
address space, leaving 8mbyte. Then because subsystems were moved into
separate address spaces, a 1mbyte common segment area ("CSA") was
mapped into every virtual address space for passing stuff back&forth
with subsystems (leaving 7mbyte). However, CSA requirements were
somewhat proportional to number of concurrently running "address
spaces" and number of subsystems and "CSA" quickly becomes "common
system area" (by 3033 it was running 5-6mbytes and threatening to
become 8mbyte, leaving zero for applications). Much of 370/XA was
specifically for addressing MVS shortcomings (head of POK had already
convinced corporate to kill the VM370 project, shutdown the
development group and transfer all the people to POK to work on
MVS/XA).

trivia: Boeing Huntsville had gotten a two-processor 360/67 system
with several 2250s for TSS/360 ... but configured them as two 360/65s
with MVT for 2250 CAD/CAM applications. They had already run into the
MVT storage management problems and had modified MVTR13 to run in
virtual memory mode (no paging, but able to leverage virtual memory as
a partial countermeasure to the MVT storage management problems; a
precursor to the decision to add virtual memory to all 370s).

I had also been blamed for online computer conferencing (late
70s/early 80s) on the IBM internal network (larger than
ARPANET/Internet from just about the beginning until sometime mid/late
80s), which really took off the spring of 1981 when I distributed a
trip report of visiting Jim Gray at Tandem (he had left IBM SJR the
fall before, palming off some number of things on me); only about 300 directly
participated but claims that 25,000 were reading; folklore is that
when corporate executive committee was told, 5of6 wanted to fire me
(possibly mitigating, lots of internal datacenters were running my
enhanced production operating systems).  One of the outcomes was a
researcher was hired to study how I communicated, they sat in the back
of my office for nine months taking notes on face-to-face, telephone,
got copies of all incoming/outgoing email and logs of all instant
messages; results were IBM reports, conference papers, books, and a
Stanford PHD (joint with language and computer AI).

About the same time, I was introduced to John Boyd and would sponsor
his briefings at IBM. One of his stories was about being very vocal
that the electronics across the trail wouldn't work and (possibly as
punishment) was then put in command of "spook base". One of Boyd
biographies claims that "spook base" was a $2.5B "windfall" for IBM
(ten times Boeing Renton), which would have helped cover the cost of
the disastrous/failed early 70s IBM FS project.

89/90, the Commandant of the Marine Corps leverages Boyd for a Corps
make-over ... at a time when IBM was in desperate need of a
make-over. There have continued to be Military Strategy "Boyd
Conferences" at Quantico "MCU" since Boyd passed in 1997
(although the former Commandant and I were frequently the only ones
present who personally knew Boyd).

SHARE Original/Founding Knights of VM
http://mvmua.org/knights.html
IBM Mainframe Hall of Fame
https://www.enterprisesystemsmedia.com/mainframehalloffame
IBM System Mag article (some history details slightly garbled)
https://web.archive.org/web/20200103152517/http://archive.ibmsystemsmag.com/mainframe/stoprun/stop-run/making-history/

from when IBM Jargon was young and "Tandem Memos" was new
https://comlay.net/ibmjarg.pdf

Tandem Memos - n. Something constructive but hard to control; a fresh
of breath air (sic). That's another Tandem Memos. A phrase to worry
middle management. It refers to the computer-based conference (widely
distributed in 1981) in which many technical personnel expressed
dissatisfaction with the tools available to them at that time, and
also constructively criticized the way products were [are]
developed. The memos are required reading for anyone with a serious
interest in quality products. If you have not seen the memos, try
reading the November 1981 Datamation summary.

... snip ...

IBM science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
Original SQL/Relational posts
https://www.garlic.com/~lynn/submain.html#systemr
Online computer conferencing ("Tandem Memo") posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
John Boyd posts and web refs
https://www.garlic.com/~lynn/subboyd.html

--
virtualization experience starting Jan1968, online at home since Mar1970

VMNETMAP

From: Lynn Wheeler <lynn@garlic.com>
Subject: VMNETMAP
Date: 08 Aug, 2024
Blog: Facebook

I had a paper copy of the 700-some node map ... and scanned it ... but
lots of non-electronic stuff has gone in several downsizings over the
years. Image of the desk ornament for the 1000th node (the "1000th node
globe").

Archived post with some of the weekly distribution files passing 1000
nodes in 1983 (also generated list of all company locations that added
one or more nodes during 1983):
https://www.garlic.com/~lynn/2006k.html#8

part of a 1977 network map scan.

coworker at the science center was responsible for the cambridge
wide-area network that morphs into the corporate network (larger than
arpanet/internet from just about the beginning until sometime mid/late
80s) and technology also used for the corporate sponsored univ BITNET
(also for a time larger than arpanet/internet)
https://en.wikipedia.org/wiki/BITNET

three people had invented GML at the science center in 1969 and GML
tag processing was added to CMS SCRIPT. One of the GML inventors was
originally hired to promote the science center wide-area network:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm

Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

nearly all cp67 vnet/rscs ... and then vm370 vnet/rscs. NJI/NJE
simulation drivers were done for vnet/rscs so they could connect HASP
(& then MVS/JES systems) but had to be very careful. The code had
originated in HASP ("TUCC" in cols 68-71), defined nodes in the unused
slots of the 255 entry pseudo device table (usually around 160-180 of
them), and would trash traffic where destination or origin nodes
weren't in the local table ... and the network had quickly passed 250
nodes ... JES did get a fix to handle 999 nodes .... but it was after
the corporate network was well past 1000 nodes.
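
A minimal sketch (made-up structure, not actual HASP/JES code or data)
of the failure mode: node definitions go in whatever slots of the
255-entry pseudo device table are left over, and traffic to or from
anything not defined locally is simply discarded:

  TABLE_SIZE = 255
  local_devices = 170                     # pseudo devices already defined
  max_nodes = TABLE_SIZE - local_devices  # slots left for node definitions

  network = [f"NODE{i:03d}" for i in range(1, 1001)]   # net passed 1000 nodes
  known = set(network[:max_nodes])                     # only ~85 fit locally

  def forward(origin, destination, known_nodes):
      # HASP-derived behavior: if either end isn't in the local table, trash it
      if origin not in known_nodes or destination not in known_nodes:
          return "discarded"
      return "forwarded"

  print(forward("NODE010", "NODE500", known))   # discarded: NODE500 never fit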

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
posts mentioning HASP/ASP, JES2/JES, and/or NJE/NJI:
https://www.garlic.com/~lynn/submain.html#hasp
BITNET (& EARN) posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

Edson (responsible for rscs/vnet)
https://en.wikipedia.org/wiki/Edson_Hendricks

In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.

... snip ...

SJMerc article about Edson (he passed aug2020) and "IBM'S MISSED
OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at
wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references
from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

Other problems with JES: headers intermixed job control and network
fields ... and exchanging traffic between MVS/JES systems at different
release levels would crash MVS. Eventually RSCS/VNET NJE/NJI drivers
were updated to recognize the format required by directly connected
JES systems and, if traffic originated from a JES system at a
different release level, reformat the fields to keep MVS from
crashing. As a result installations tended to keep MVS/JES systems
hidden behind a VM370 RSCS/VNET system. There is an infamous case
where MVS systems in Hursley were crashing because San Jose had
changed JES and the Hursley VM370 RSCS/VNET group hadn't yet gotten
the corresponding fixes.

Another problem was VTAM/JES had link time-out. STL (since renamed
SVL) was setting up a double-hop satellite link (up/down west/east
coast and up/down east coast/England) with Hursley (to use each other
systems offshift). They hooked it up and everything work fine. Then a
(MVS/JES biased) executive directed it to be between two JES systems
and nothing worked. They then switched back to RSCS/VNET and it worked
fine. The executive then claimed that RSCS/VNET was too dumb to know
it wasn't working. It turns out VTAM/JES had hard-coded round-trip
limit and double-hop satellite link round-trip time exceeded the
limit.
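
Rough latency arithmetic (assuming geosynchronous hops; exact slant
ranges depend on the ground stations) for why a round-trip limit tuned
to terrestrial links breaks on a double-hop satellite path:

  C   = 299_792      # km/sec, speed of light
  GEO = 35_786       # km, geosynchronous altitude (minimum slant range)

  one_hop_one_way    = 2 * GEO / C          # up + down, ~0.24 sec
  double_hop_one_way = 2 * one_hop_one_way  # ~0.48 sec
  round_trip         = 2 * double_hop_one_way
  print(f"double-hop round trip: >= {round_trip:.2f} sec")
  # ~1 sec, versus tens of milliseconds on typical terrestrial links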

--
virtualization experience starting Jan1968, online at home since Mar1970

VMNETMAP

From: Lynn Wheeler <lynn@garlic.com>
Subject: VMNETMAP
Date: 08 Aug, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024e.html#25 VMNETMAP

3278 drift; 3277/3272 had hardware response of .086sec ... it was
followed with 3278 that moved a lot of electronics back into 3274
(reducing 3278 manufacturing cost) ... drastically increasing protocol
chatter and latency, increasing hardware response to .3-.5sec
(depending on amount of data). At the time there were studies showing
quarter sec response improved productivity. Some number of internal VM
datacenters were claiming quarter second system response ... but you
needed .164sec (or better) system response with a 3277 terminal to get
quarter sec response for the person (I was shipping enhanced
production operating system getting .11sec system response). A
complaint written to the 3278 product administrator got back a response
that 3278 wasn't for interactive computing but for "data entry" (aka
electronic keypunch). The MVS/TSO crowd never even noticed, it was a
really rare TSO operation that even saw 1sec system response. Later
IBM/PC 3277 hardware emulation card would get 4-5 times
upload/download throughput of 3278 card.

some posts mentioning 3277/3272 & 3278/3274 timings
https://www.garlic.com/~lynn/2024b.html#31 HONE, Performance Predictor, and Configurators
https://www.garlic.com/~lynn/2019e.html#28 XT/370
https://www.garlic.com/~lynn/2019c.html#4 3270 48th Birthday
https://www.garlic.com/~lynn/2017d.html#25 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2012d.html#19 Writing article on telework/telecommuting

I took two credit hr intro to fortran/computers and at end of semester
was hired to rewrite 1401 MPIO for 360/30 ... univ shutdown the
datacenter for the weekend and I had it dedicated, although 48hrs w/o
sleep made monday classes difficult. Within a year of taking the intro
class the 709 was replaced with a 360/67 (originally intended for
tss/360 but ran as 360/65) and I was hired fulltime responsible for
OS/360.

along the way univ. library got ONR grant to do online catalog (some
of the money went for 2321 datacell). Project was also selected as
beta test for the original CICS program product ... and CICS support
was added to my tasks

posts mentioning CICS &/or BDAM
https://www.garlic.com/~lynn/submain.html#cics

--
virtualization experience starting Jan1968, online at home since Mar1970

VMNETMAP

From: Lynn Wheeler <lynn@garlic.com>
Subject: VMNETMAP
Date: 08 Aug, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024e.html#25 VMNETMAP
https://www.garlic.com/~lynn/2024e.html#26 VMNETMAP

parasite/story drift: small, compact cms apps ... 3270 terminal
emulation and hllapi like facility (predating ibm/pc) ... could login
local machine running script and/or dial to PVM (aka passthrough) and
login to remote machine and run commands. overview and examples
https://www.garlic.com/~lynn/2001k.html#35
story to retrieve RETAIN info
https://www.garlic.com/~lynn/2001k.html#36

The author had also done the VMSG cms email app; a very early source
version was picked up by the PROFS group for their email client. When
he tried to offer them a much enhanced version, they tried to get him
fired. He
then demonstrated that every PROFS email carried his initials in
non-displayed field. After that everything quieted down and he would
only share his source with me and one other person.

other posts mentioning Parasite/Story and VMSG
https://www.garlic.com/~lynn/2023g.html#49 REXX (DUMRX, 3092, VMSG, Parasite/Story)
https://www.garlic.com/~lynn/2023f.html#46 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023c.html#43 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#97 Online Computer Conferencing
https://www.garlic.com/~lynn/2023.html#62 IBM (FE) Retain
https://www.garlic.com/~lynn/2022b.html#2 Dataprocessing Career
https://www.garlic.com/~lynn/2021h.html#33 IBM/PC 12Aug1981
https://www.garlic.com/~lynn/2019d.html#108 IBM HONE
https://www.garlic.com/~lynn/2019c.html#70 2301, 2303, 2305-1, 2305-2, paging, etc
https://www.garlic.com/~lynn/2018f.html#54 PROFS, email, 3270
https://www.garlic.com/~lynn/2018.html#20 IBM Profs
https://www.garlic.com/~lynn/2017k.html#27 little old mainframes, Re: Was it ever worth it?
https://www.garlic.com/~lynn/2017g.html#67 What is the most epic computer glitch you have ever seen?
https://www.garlic.com/~lynn/2017.html#98 360 & Series/1
https://www.garlic.com/~lynn/2015d.html#12 HONE Shutdown
https://www.garlic.com/~lynn/2014k.html#39 1950: Northrop's Digital Differential Analyzer
https://www.garlic.com/~lynn/2014j.html#25 another question about TSO edit command
https://www.garlic.com/~lynn/2014h.html#71 The Tragedy of Rapid Evolution?
https://www.garlic.com/~lynn/2014e.html#49 Before the Internet: The golden age of online service
https://www.garlic.com/~lynn/2014.html#1 Application development paradigms [was: RE: Learning Rexx]
https://www.garlic.com/~lynn/2013d.html#66 Arthur C. Clarke Predicts the Internet, 1974
https://www.garlic.com/~lynn/2012d.html#17 Inventor of e-mail honored by Smithsonian
https://www.garlic.com/~lynn/2011o.html#30 Any candidates for best acronyms?
https://www.garlic.com/~lynn/2011m.html#44 CMS load module format
https://www.garlic.com/~lynn/2011f.html#11 History of APL -- Software Preservation Group
https://www.garlic.com/~lynn/2011b.html#83 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011b.html#67 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2009q.html#66 spool file data
https://www.garlic.com/~lynn/2009q.html#4 Arpanet
https://www.garlic.com/~lynn/2009k.html#0 Timeline: The evolution of online communities
https://www.garlic.com/~lynn/2006n.html#23 sorting was: The System/360 Model 20 Wasn't As Bad As All That

--
virtualization experience starting Jan1968, online at home since Mar1970

VMNETMAP

From: Lynn Wheeler <lynn@garlic.com>
Subject: VMNETMAP
Date: 08 Aug, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024e.html#25 VMNETMAP
https://www.garlic.com/~lynn/2024e.html#26 VMNETMAP
https://www.garlic.com/~lynn/2024e.html#27 VMNETMAP

In early 80s, I got HSDT project (T1 and faster computer links, both
terrestrial and satellite) and one of my first satellite T1 links was
between the Los Gatos lab on the west coast and Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in Kingston (on east coast) that had a bunch of Floating Point
Systems boxes (the latest ones had 40mbyte/sec disk arrays).
https://en.wikipedia.org/wiki/Floating_Point_Systems

Was supporting both RSCS/VNET and TCP/IP and also having lots of
interference with the communication group, whose VTAM boxes were capped
at 56kbits/sec. Was also working with the NSF director and was supposed
to get $20M to interconnect the NSF supercomputer centers; then congress
cuts the budget, some other things happened and eventually an RFP was
released
(in part based on what we already had running). From 28Mar1986
Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.

... snip ...

IBM internal politics was not allowing us to bid (being blamed for
online computer conferencing inside IBM likely contributed, folklore
is that 5of6 members of corporate executive committee wanted to fire
me). The NSF director tried to help by writing the company a letter
(3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and
director of Research, copying IBM CEO) with support from other
gov. agencies ... but that just made the internal politics worse (as
did claims that what we already had operational was at least 5yrs
ahead of the winning bid). As regional networks connect in, it becomes
the NSFNET backbone, precursor to the modern internet.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

The communication group had been fabricating all sorts of
justifications why they weren't supporting links faster than
56kbits/sec. They eventually came out with the 3737 to drive a
(short-haul, terrestrial) T1 link ... with a boatload of memory and
Motorola 68k processors, with simulation pretending to be a CTCA VTAM
to the local mainframe VTAM. The problem was VTAM's "window pacing
algorithm" had a limit on outstanding packets, and even a short-haul
T1 would absorb the full packet limit before any replies began
arriving (return ACKs would eventually bring the count back below the
limit, allowing additional packets to transmit, but the result was
only a very small fraction of the T1 bandwidth being used). The local
3737 simulated CTCA VTAM would immediately ACK
packets, trying to keep host VTAM packets flowing ... and then use
different protocol to actually transmit packets to the remote 3737
(however it was only good for about 2mbits, compared to US T1
full-duplex 3mbits and EU T1 full-duplex 4mbits).
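
The general relationship (the window and packet sizes below are made-up
illustration numbers, not actual VTAM values): a window-paced sender can
have at most one window of data outstanding per round trip, so its
throughput is capped at window/round-trip regardless of link speed,
which is what the 3737 was trying to hide from the host VTAM:

  def window_cap_bytes_per_sec(window_packets, packet_bytes, round_trip_sec):
      # at most one full window can be sent per round trip
      return window_packets * packet_bytes / round_trip_sec

  t1_one_way = 1_544_000 / 8             # bytes/sec, one direction of US T1
  for rtt in (0.010, 0.050, 0.100):      # assumed end-to-end round trips
      cap = window_cap_bytes_per_sec(7, 1000, rtt)
      pct = min(1.0, cap / t1_one_way)
      print(f"rtt {rtt*1000:3.0f} ms: cap ~{cap:,.0f} bytes/sec (~{pct:.0%} of T1)")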

By comparison, HSDT ran dynamic adaptive rate-based pacing
(rather than window pacing) ... adjusting how fast (the interval
between) packets were sent to the other end. If no packets were being
dropped and the rate
didn't have to be adjusted, then it would transmit as fast as the link
could transmit.
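
A minimal sketch (Python, with made-up adjustment constants, not the
actual HSDT algorithm) of the rate-based idea: the sender adjusts the
interval between packets from observed drops instead of stalling on a
window of ACKs:

  class RatePacer:
      def __init__(self, interval_sec=0.001):
          self.interval = interval_sec       # time between transmissions

      def on_feedback(self, drops_seen):
          if drops_seen:
              self.interval *= 1.5           # drops: back off the rate
          else:
              self.interval *= 0.95          # no drops: gently speed up
          self.interval = min(max(self.interval, 0.0001), 0.1)

      def next_send_time(self, now):
          # never stalls waiting for ACKs; just spaces the packets out
          return now + self.interval

  pacer = RatePacer()
  pacer.on_feedback(drops_seen=False)        # link keeping up
  pacer.on_feedback(drops_seen=True)         # congestion seen
  print(f"inter-packet interval: {pacer.interval*1000:.2f} ms")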

Corporate also required all links (leaving IBM premises) had to be
encrypted and I really hated what I had to pay for T1 link
encryptors, and faster encryptors were really hard to find. I had
done some playing with software DES and found it ran about
150kbytes/sec on a 3081 (aka both 3081 processors would be required to
handle software encryption for a T1 full-duplex link). I then got
involved in doing a link encryptor that could handle 3mbytes/sec
(not mbits) and cost less than $100 to build. Initially the corporate
DES group said it seriously weakened the DES implementation. It took me
3months to figure out how to explain what was happening, but it was a
hollow victory .... they said that there was only one organization
that was allowed to use such crypto ... we could make as many boxes as
we wanted but they would all have to be sent to that organization. It
was when I realized there were three kinds of crypto in the world: 1)
the kind they don't care about, 2) the kind you can't do, 3) the kind
you can only do for them.
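
Back to the software-DES figure above, a quick arithmetic check
(assuming standard US T1 line rate) of why host-software encryption
wasn't an option:

  t1_one_way   = 1_544_000 / 8        # ~193 kbytes/sec each direction
  full_duplex  = 2 * t1_one_way       # ~386 kbytes/sec both directions
  des_per_cpu  = 150_000              # ~150 kbytes/sec software DES on 3081
  print(f"~{full_duplex / des_per_cpu:.1f} 3081 processors per T1 link")
  # ~2.6, i.e. (more than) both processors of a 3081 just for one link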

--
virtualization experience starting Jan1968, online at home since Mar1970

VMNETMAP

From: Lynn Wheeler <lynn@garlic.com>
Subject: VMNETMAP
Date: 09 Aug, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024e.html#25 VMNETMAP
https://www.garlic.com/~lynn/2024e.html#26 VMNETMAP
https://www.garlic.com/~lynn/2024e.html#27 VMNETMAP
https://www.garlic.com/~lynn/2024e.html#28 VMNETMAP

In the 80s, communication group was getting the native RSCS/VNET
drivers restricted and just shipping simulated NJE/NJI drivers
supporting SNA (and univ BITNET was converting to TCP/IP instead)
... for awhile the internal corporate VNET continued to use the native
RSCS/VNET drivers because they had higher throughput.

Then communication group fabricated a story for the executive
committee that the internal RSCS/VNET had to all convert to SNA (or
otherwise PROFS would stop working). I had done a lot of work to get
RCSC/VNET drivers working at T1 speed and was scheduled to give
presentation at next corporate CJN backbone meeting. Then got email
that the communication group had got CJN meetings restricted to
managers only ... didn't want a lot of technical people confusing
decision makers with facts (as part of getting it converted to
SNA). Some old email in archived posts
https://www.garlic.com/~lynn/2006x.html#email870302
https://www.garlic.com/~lynn/2011.html#email870306

the communication group was also spreading internal misinformation
that SNA/VTAM could be used for NSFNET and somebody was collecting a
lot of that email and then forwarded it to us ... heavily clipped and
redacted (to protect the guilty)
https://www.garlic.com/~lynn/2006w.html#email870109

internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

--
virtualization experience starting Jan1968, online at home since Mar1970

Disk Capacity and Channel Performance

From: Lynn Wheeler <lynn@garlic.com>
Subject: Disk Capacity and Channel Performance
Date: 09 Aug, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024e.html#22 Disk Capacity and Channel Performance

1980, STL (since renamed SVL) was bursting at the seams and they were
moving 300 people from the IMS group (and their 3270s) to an offsite
bldg (about halfway between STL and the main plant site), with
dataprocessing service
back to STL datacenter. They tried "remote" 3270 but found human
factors totally unacceptable. I then get con'ed into doing channel
extender support, locating channel-attached 3270 controllers at the
offsite bldg, resulting in no perceptible difference between off-site
and inside STL.

Then the hardware vendor tries to get IBM to release my support
... but there were some POK engineers playing with some serial stuff
who get it vetoed (afraid that if it was in the market, it would make
it harder to get their own stuff released, which they eventually do a
decade later as ESCON, when it is already obsolete).

It turns out that STL had been spreading 3270 controllers across the
168&3033 channels with 3830 disk controllers. The channel-extender
support had significantly reduced channel-busy (getting 3270
controllers directly off IBM channels) for same amount of 3270
traffic, resulting in 10-15% improvement in system throughput. STL
then was considering putting all 3270 channel-attached controllers
behind channel-extender support.

channel-extender support
https://www.garlic.com/~lynn/submisc.html#channel.extender

--
virtualization experience starting Jan1968, online at home since Mar1970

VMNETMAP

From: Lynn Wheeler <lynn@garlic.com>
Subject: VMNETMAP
Date: 09 Aug, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024e.html#25 VMNETMAP
https://www.garlic.com/~lynn/2024e.html#26 VMNETMAP
https://www.garlic.com/~lynn/2024e.html#27 VMNETMAP
https://www.garlic.com/~lynn/2024e.html#28 VMNETMAP
https://www.garlic.com/~lynn/2024e.html#29 VMNETMAP

I had first done channel-extender support in 1980 for STL moving 300
people to off-site bldg (which made use of T3 collins digital
radio). Then IBM in Boulder was moving a lot of people to bldg across
heavy traffic highway. They wanted to use infrared modem on roofs of
the two bldgs (eliminating lots of gov. permission issues) ... however there
were snide remarks that Boulder weather would adversely affect the
signal. I had fireberd bit-error testers on 56kbit subchannel and did
see some bit drops during white-out snow storm when nobody was able to
get into work (I had written Turbo Pascal program for PC/ATs that
supported up to four ascii inputs from bit error testers for keeping
machine readable logs).

The big problem was sunshine ... heating of the bldgs on one side
during the day (resulting in an imperceptible lean) slightly changed the
focus of the infrared beam between the two (roof mounted) modems. It
took some amount of trial and error to compensate for bldg sunshine
heating.

Then my HSDT project got a custom-built 3-node Ku-band satellite system (4.5m
dishes in Los Gatos and Yorktown and 7m dish in Austin) with
transponder on SBS4. Yorktown community meeting complained about
radiation from the 25watt signal. It was pointed out that even hanging
directly above the transmitting dish, they would get significantly less
radiation than they were currently getting from the local FM radio
tower transmission.

channel extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
hsdt project
https://www.garlic.com/~lynn/subnetwork.html#hsdt

--
virtualization experience starting Jan1968, online at home since Mar1970

The Irrationality of Markets

From: Lynn Wheeler <lynn@garlic.com>
Subject: The Irrationality of Markets
Date: 09 Aug, 2024
Blog: Facebook

The Irrationality of Markets
https://www.nakedcapitalism.com/2024/08/the-irrationality-of-markets.html

The commodities market used to have a rule that only entities with
significant holdings could play ... because speculators resulted in
wild, irrational volatility (they live on volatility, bet on the
direction of market change and then manipulate to see it happens,
bet/pump&dump going up and then bet on going down). GRIFTOPIA
https://www.amazon.com/Griftopia-Machines-Vampire-Breaking-America-ebook/dp/B003F3FJS2/
has chapter on commodity market secret letters allowing specific
speculators to play, resulting in the huge oil spike summer of
2008. Fall of 2008, member of congress released details of speculator
transactions responsible for the huge oil price spike/drop ... and the
press, instead of commending the congressman, pilloried him for
violating the privacy of those responsible.

old interview mentions the illegal activity goes on all the time in
equity markets (even before HFT)
https://nypost.com/2007/03/20/cramer-reveals-a-bit-too-much/

SEC Caught Dark Pool and High Speed Traders Doing Bad Stuff
https://web.archive.org/web/20140608215213/http://www.bloombergview.com/articles/2014-06-06/sec-caught-dark-pool-and-high-speed-traders-doing-bad-stuff

Fast money: the battle against the high frequency traders; A 'flash
crash' can knock a trillion dollars off the stock market in minutes as
elite traders fleece the little guys. So why aren't the regulators
stepping in? We talk to the legendary lawyer preparing for an epic
showdown
http://www.theguardian.com/business/2014/jun/07/inside-murky-world-high-frequency-trading

The SEC's Mary Jo White Punts on High Frequency Trading and Abandons
Securities Act of 1934
http://www.nakedcapitalism.com/2014/06/secs-mary-jo-white-punts-high-frequency-trading-abandons-securities-act-1934.html

griftopia posts
https://www.garlic.com/~lynn/submisc.html#griftopia
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

posts mentioning illegal activity and high frequency trading
https://www.garlic.com/~lynn/2022b.html#96 Oil and gas lobbyists are using Ukraine to push for a drilling free-for-all in the US
https://www.garlic.com/~lynn/2021k.html#96 'Most Americans Today Believe the Stock Market Is Rigged, and They're Right'
https://www.garlic.com/~lynn/2019b.html#11 For The Average Investor, The Next Bear Market Will Likely Be The Last
https://www.garlic.com/~lynn/2018f.html#105 Is LINUX the inheritor of the Earth?
https://www.garlic.com/~lynn/2018f.html#104 Netscape: The Fire That Filled Silicon Valley's First Bubble
https://www.garlic.com/~lynn/2017c.html#22 How do BIG WEBSITES work?
https://www.garlic.com/~lynn/2015g.html#47 seveneves
https://www.garlic.com/~lynn/2015g.html#46 seveneves
https://www.garlic.com/~lynn/2015c.html#17 Robots have been running the US stock market, and the government is finally taking control
https://www.garlic.com/~lynn/2015.html#58 IBM Data Processing Center and Pi
https://www.garlic.com/~lynn/2014g.html#109 SEC Caught Dark Pool and High Speed Traders Doing Bad Stuff
https://www.garlic.com/~lynn/2014f.html#20 HFT, computer trading
https://www.garlic.com/~lynn/2014e.html#72 Three Expensive Milliseconds
https://www.garlic.com/~lynn/2014e.html#18 FBI Investigates High-Speed Trading
https://www.garlic.com/~lynn/2013d.html#54 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012c.html#13 Study links ultrafast machine trading with risk of crash
https://www.garlic.com/~lynn/2011l.html#21 HOLLOW STATES and a CRISIS OF CAPITALISM

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 138/148

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 138/148
Date: 09 Aug, 2024
Blog: Facebook

after Future System implodes, Endicott asks me to help with
Virgil/Tully (aka 138/148), they wanted to do microcode assist
(competitive advantage especially in world trade). I was told there
was 6kbytes of microcode space and that 6kbytes of 370 instructions
would approx. translate into 6kbytes of microcode instructions running
ten times faster. I was to identify the 6kbytes of highest executed
kernel code. old archive post with the initial analysis
https://www.garlic.com/~lynn/94.html#21

basically 6kbytes accounted for 79.55% of kernel execution (and, moved
to microcode, would run ten times faster). Then they wanted me to run
around the world presenting the business case to US & world-trade
business planners and forecasters. I was told that US region
forecasters got promoted for forecasting whatever corporate told them
was strategic, while world-trade forecasters could get fired for bad
forecasts. One of the differences was that bad US region forecasts had
to be "eaten" by the factory, while for world-trade forecasts the
factories shipped to the ordering country (so factories tended to redo
US region forecasts to be on the safe side). In any case, the US
region 138/148 forecasts were that the features didn't make any
difference; they would sell some percentage more than the 135/145. On
the other hand, the world-trade forecasters said that without
distinct/unique features they wouldn't sell any 138/148s because of
competition with the clone 370 makers.
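
As an aside (my arithmetic, not part of the original analysis), an
Amdahl's-law estimate of what the assist was worth: if the 79.55% of
kernel execution that moves to microcode runs ten times faster and the
rest is unchanged, kernel execution overall speeds up by roughly 3.5
times:

# back-of-the-envelope Amdahl's-law estimate (illustrative only)
moved   = 0.7955          # fraction of kernel execution moved to microcode
speedup = 10.0            # assumed speedup of the moved portion
overall = 1.0 / ((1.0 - moved) + moved / speedup)
print(round(overall, 2))  # -> 3.52, i.e. ~3.5x faster kernel execution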

Then Endicott tried to convince corporate that VM370 be pre-installed
on every 138/148 system (somewhat like current PR/SM & LPAR). This was
in period when head of POK was convincing corporate to kill the VM370
product, shutdown the development group and transfer all the people to
POK for MVS/XA ... and Endicott was having to fight just to preserve
VM370. Endicott wasn't able to get permission to ship every 138/148
with VM370 pre-installed, but did manage to save the VM370 product
mission (for the mid-range), although they had to recreate a
development group from scratch.

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys

some other posts mentioning 138/148 ecps
https://www.garlic.com/~lynn/2023f.html#57 Vintage IBM 370/125
https://www.garlic.com/~lynn/2013o.html#82 One day, a computer will fit on a desk (1974) - YouTube
https://www.garlic.com/~lynn/2012d.html#70 Mainframe System 370
https://www.garlic.com/~lynn/2007s.html#36 Oracle Introduces Oracle VM As It Leaps Into Virtualization
https://www.garlic.com/~lynn/2007g.html#44 1960s: IBM mgmt mistrust of SLT for ICs?

--
virtualization experience starting Jan1968, online at home since Mar1970

VMNETMAP

From: Lynn Wheeler <lynn@garlic.com>
Subject: VMNETMAP
Date: 09 Aug, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024e.html#25 VMNETMAP
https://www.garlic.com/~lynn/2024e.html#26 VMNETMAP
https://www.garlic.com/~lynn/2024e.html#27 VMNETMAP
https://www.garlic.com/~lynn/2024e.html#28 VMNETMAP
https://www.garlic.com/~lynn/2024e.html#29 VMNETMAP
https://www.garlic.com/~lynn/2024e.html#31 VMNETMAP

One of the supposed strings attached to the HSDT funding was that I
was supposed to show some IBM content. CPD did have a T1 2701 in the
60s ... but then possibly got stuck at 56kbits because of VTAM
... but then possibly got stuck at 56kbits because of VTAM
shortcomings. I was able to find the S/1 Zirpel T1 card from FSD
(apparently for some of the gov. customers that had failing 2701s). I
go to order a few S/1s and find that there was a year's backlog;
apparently the recently purchased ROLM had ordered a whole slew of S/1
creating the backlog. I knew the director of ROLM datacenter from
their IBM days and managed to cut a deal for some S/1 order positions
if I would help ROLM with some of their problems.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

some posts that mentioning zirpel, rolm, s/1
https://www.garlic.com/~lynn/2024b.html#62 Vintage Series/1
https://www.garlic.com/~lynn/2023f.html#44 IBM Vintage Series/1
https://www.garlic.com/~lynn/2023c.html#35 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023.html#101 IBM ROLM
https://www.garlic.com/~lynn/2022f.html#111 IBM Downfall
https://www.garlic.com/~lynn/2021j.html#62 IBM ROLM
https://www.garlic.com/~lynn/2021j.html#12 Home Computing
https://www.garlic.com/~lynn/2021f.html#2 IBM Series/1
https://www.garlic.com/~lynn/2018b.html#9 Important US technology companies sold to foreigners
https://www.garlic.com/~lynn/2017h.html#99 Boca Series/1 & CPD
https://www.garlic.com/~lynn/2016h.html#26 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016d.html#27 Old IBM Mainframe Systems
https://www.garlic.com/~lynn/2015e.html#84 Inaugural Podcast: Dave Farber, Grandfather of the Internet
https://www.garlic.com/~lynn/2015e.html#83 Inaugural Podcast: Dave Farber, Grandfather of the Internet
https://www.garlic.com/~lynn/2014f.html#24 Before the Internet: The golden age of online services
https://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2013j.html#43 8080 BASIC
https://www.garlic.com/~lynn/2013j.html#37 8080 BASIC
https://www.garlic.com/~lynn/2013g.html#71 DEC and the Bell System?
https://www.garlic.com/~lynn/2009j.html#4 IBM's Revenge on Sun
https://www.garlic.com/~lynn/2006n.html#25 sorting was: The System/360 Model 20 Wasn't As Bad As All That

--
virtualization experience starting Jan1968, online at home since Mar1970

Disk Capacity and Channel Performance

From: Lynn Wheeler <lynn@garlic.com>
Subject: Disk Capacity and Channel Performance
Date: 10 Aug, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024e.html#22 Disk Capacity and Channel Performance
https://www.garlic.com/~lynn/2024e.html#30 Disk Capacity and Channel Performance

when I transfer out to SJR, I got to wander around datacenters in
silicon valley, including bldgs 14&15 (disk engineering and product
test) across the street. Bldg14 had multiple layers of physical
security, including a machine room where each development box was
inside a heavy mesh locked cage ("testcells"). They were running 7x24,
pre-scheduled,
stand-alone testing and had mentioned that they had recently tried MVS
... but it had 15min mean-time-between-failure (in that environment)
requiring manual re-ipl. I offered to rewrite I/O supervisor making it
bullet proof and never fail, allowing any amount of on-demand,
concurrent testing, greatly improving productivity. The downside was
they got in the habit of blaming my software for problems and I would
have to spend increasing amounts of time playing disk engineer
diagnosing their problems.

Then bldg15 got 1st engineering 3033 outside POK processor engineering
flr. Since testing only took a percent or two of the processor, we
scrounged up a 3830 disk controller and a string of 3330 disks and set
up our own private online service (and ran 3270 coax under the street
to my 3277 in sjr/28). One Monday morning I get a call asking what I
had done over the weekend to destroy 3033 throughput. After some
back&forth eventually discover that somebody had replaced the 3830
disk controller with engineering 3880 controller. The engineers had
been complaining about how the bean counters had dictated that 3880
have really slow (inexpensive?) microprocessor for most of operations
(other than actual data transfers) ... it really slowed down
operations. To partially mask how slow it really was, they were trying
to present end-of-operation interrupt early, hoping that they could
finish up overlapped with software interrupt handling (wasn't working,
software was attempting to redrive with queued I/O, controller then
had to respond with control unit busy (SM+BUSY), requeue the attempted
redrive, and wait for control unit end (longer elapsed time, higher
channel busy, and higher software overhead).
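
A minimal sketch (sequence only, no timings; reconstructed from the
description above, not from any IBM documentation) contrasting the two
behaviors:

# normal case: ending status presented only when the controller is done
normal_sequence = [
    "controller: operation completes, presents ending status",
    "software: handles interrupt, issues SIO to redrive with queued I/O",
    "controller: accepts the redrive immediately",
]

# 3880 early-interrupt case: ending status presented before it is done
early_interrupt_sequence = [
    "controller: presents ending status early (still busy internally)",
    "software: handles interrupt, issues SIO to redrive with queued I/O",
    "controller: not ready, responds control unit busy (SM+BUSY)",
    "software: requeues the redrive and waits",
    "controller: finishes, presents CUE (control unit end)",
    "software: handles the CUE interrupt, issues the SIO again",
]
# net effect: extra SIO, extra interrupt, longer elapsed time, higher
# channel busy and higher software overhead per operation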

The early interrupt was tuned to not cause as bad a problem ... but
the higher channel busy still persisted. Later, the 3090/trout had
configured the number of channels for target system throughput
(assuming the 3880 was the same as the 3830 but with 3380 3mbyte data
transfer). When they found out how bad the 3880 was, they realized
they had to greatly increase the number of channels (to meet the
throughput target), which also required an additional TCM
(semi-facetiously they said they would bill the 3880 business for the
higher 3090 manufacturing costs). Marketing eventually respins the
large increase in 3090 channels as a great I/O machine (rather than a
countermeasure for the increased 3880 channel busy).

I had also written an (internal only) research report on the I/O
reliability work for bldgs 14&15 and happened to mention the MVS 15min
MTBF ... bringing the wrath of the MVS organization down on my
head. Later, just before 3880/3380 was about to ship, FE had a
regression test of 57 simulated errors that were likely to occur. In
all 57 cases, MVS was still crashing (requiring re-ipl) and in 2/3rds
of the cases there was no indication of what caused the failure. I
didn't feel bad about it.

bldg26: long 1-story ... mostly machine room, lots of MVS systems
... long gone. Engineering bldg14: two-story, across the street
... last time I checked it was one of the few still standing
... machine room on the 2nd flr ... with the testcell wire cages.
After the earthquake, the bldgs underwent earthquake remediation
... when it came to bldg14, engineering was temporarily relocated to a
non-IBM bldg south of the main plant site.

getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk

some recent posts mentioning 3090 had to greatly increase
number of channels
https://www.garlic.com/~lynn/2024d.html#95 Mainframe Integrity
https://www.garlic.com/~lynn/2024d.html#91 Computer Virtual Memory
https://www.garlic.com/~lynn/2024c.html#73 Mainframe and Blade Servers
https://www.garlic.com/~lynn/2024c.html#19 IBM Millicode
https://www.garlic.com/~lynn/2024b.html#115 Disk & TCP/IP I/O
https://www.garlic.com/~lynn/2024b.html#48 Vintage 3033
https://www.garlic.com/~lynn/2024.html#80 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023g.html#57 Future System, 115/125, 138/148, ECPS
https://www.garlic.com/~lynn/2023g.html#26 Vintage 370/158
https://www.garlic.com/~lynn/2023f.html#62 Why Do Mainframes Still Exist
https://www.garlic.com/~lynn/2023f.html#36 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#103 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#18 IBM 3880 Disk Controller
https://www.garlic.com/~lynn/2023d.html#4 Some 3090 & channel related trivia:
https://www.garlic.com/~lynn/2023c.html#45 IBM DASD
https://www.garlic.com/~lynn/2023.html#41 IBM 3081 TCM
https://www.garlic.com/~lynn/2023.html#4 Mainrame Channel Redrive
https://www.garlic.com/~lynn/2022h.html#114 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022g.html#75 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022g.html#4 3880 DASD Controller
https://www.garlic.com/~lynn/2022e.html#100 Mainframe Channel I/O
https://www.garlic.com/~lynn/2022e.html#49 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022c.html#106 TCMs & IBM Mainframe
https://www.garlic.com/~lynn/2022c.html#66 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022b.html#77 Channel I/O
https://www.garlic.com/~lynn/2022b.html#15 Channel I/O
https://www.garlic.com/~lynn/2022.html#13 Mainframe I/O
https://www.garlic.com/~lynn/2021k.html#122 Mainframe "Peak I/O" benchmark
https://www.garlic.com/~lynn/2021j.html#92 IBM 3278
https://www.garlic.com/~lynn/2021i.html#30 What is the oldest computer that could be used today for real work?
https://www.garlic.com/~lynn/2021f.html#23 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2021c.html#66 ACP/TPF 3083
https://www.garlic.com/~lynn/2021.html#60 San Jose bldg 50 and 3380 manufacturing
https://www.garlic.com/~lynn/2021.html#6 3880 & 3380
https://www.garlic.com/~lynn/2020.html#42 If Memory Had Been Cheaper

--
virtualization experience starting Jan1968, online at home since Mar1970

Implicit Versus Explicit "Run" Command

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Implicit Versus Explicit "Run" Command
Newsgroups: alt.folklore.computers
Date: Sat, 10 Aug 2024 13:05:43 -1000

Lawrence D'Oliveiro <ldo@nz.invalid> writes:

When Gary Kildall created CP/M, he was heavily influenced by DEC OSes,
in terms of file/device specs etc. Yet he didn't copy the RUN command:
instead, he brought in the Unix-style interpretation of the first word
as the name of a file to be implicitly run. I don't think original
CP/M had the "search path" concept (without directories, there
wouldn't have been any point), but it did instead try different
filename extensions, namely .COM for machine-code executables and .BAT
for files containing CLI commands. And the basics of this idea flowed
through into MS-DOS.

re:
https://www.garlic.com/~lynn/2024e.html#14 50 years ago, CP/M started the microcomputer revolution
https://www.garlic.com/~lynn/2024e.html#16 50 years ago, CP/M started the microcomputer revolution

Gary was at NPG school and working on IBM CP67/CMS ... some of the MIT
7094/CTSS people had gone to 5th flr for multics and others went to
the 4th flr to IBM science center and did CP40/CMS ... CMS (cambridge
monitor system) was originally developed to run on real 360/40
... while they were doing 360/40 hardware mods to support virtual
memory ... and then ran in "CP40" (control program 360/40) virtual
machines (CP40/CMS morphs into CP67/CMS when 360/67 standard with
virtual memory became available, later morphs into vm370/cms ... where
"cms" becomes conversational monitor system).

CMS search order: the "P" filesystem was originally a real 2311 (then
a minidisk when CP40 virtual machines became available) ... more
details:
https://bitsavers.org/pdf/ibm/360/cp67/GY20-0591-1_CMS_PLM_Oct1971.pdf

filesystem/disks, "pg4" (PDF13): "P" (primary user), "S" (system),
then possibly "A, B, & C" (user files), "T" (temporary/work).

filesystem/disk search order, "pg34" (PDF43): P, T, A, B, S, C

Execution control, "119" (PDF130): executable types are "TEXT" (output
of compiler/assembler), "EXEC" (aka shell scripts), and "MODULE"
(memory image) ... if just the filename is specified (& no type), it
searches for the filename for each type ... in filesystem/disk order
... until a match is found.
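
A minimal sketch of that lookup (the disk search order is from the PLM
pages above; the nesting and the MODULE/EXEC/TEXT priority order are
my assumptions for illustration):

SEARCH_ORDER = ["P", "T", "A", "B", "S", "C"]   # filesystem/disk order
TYPE_ORDER   = ["MODULE", "EXEC", "TEXT"]       # assumed type priority

def resolve(filename, disks):
    # disks: dict mapping disk letter -> set of (filename, filetype)
    for ftype in TYPE_ORDER:
        for disk in SEARCH_ORDER:
            if (filename, ftype) in disks.get(disk, set()):
                return disk, ftype              # first match wins
    return None                                 # unknown command

# e.g. resolve("COPYFILE", {"S": {("COPYFILE", "MODULE")}})
#      -> ("S", "MODULE")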

ibm science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

other posts referencing CMS PLM
https://www.garlic.com/~lynn/2017j.html#51 IBM 1403 Printer Characters
https://www.garlic.com/~lynn/2013h.html#85 Before the PC: IBM invents virtualisation
https://www.garlic.com/~lynn/2013h.html#75 cp67 & vm370

--
virtualization experience starting Jan1968, online at home since Mar1970

Gene Amdahl

From: Lynn Wheeler <lynn@garlic.com>
Subject: Gene Amdahl
Date: 10 Aug, 2024
Blog: Linkedin

Amdahl had won the battle to make ACS 360-compatible. Folklore is that
executives were then afraid it would advance the state of the art too
fast and IBM would lose control of the market, and it is killed;
Amdahl leaves shortly later (the following lists some features that
show up more than two decades later with ES/9000)
https://people.computing.clemson.edu/~mark/acs_end.html

Not long after Amdahl leaves, IBM has Future System effort, completely
different from 370 and was going to completely replace 370; internal
politics during FS was killing off 370 projects, and the dearth of new
370s during FS is credited with giving the clone 370 makers (including
Amdahl) their market foothold
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html
one of the last nails in the FS coffin was the IBM Houston Science
Center analysis that if 370/195 applications were redone for an FS
machine made out of the fastest hardware technology available, they
would have the throughput of a 370/145 (about a 30 times slowdown)

When I join IBM one of my hobbies was enhanced production operating
systems for internal datacenters and the online branch office
sales&market support HONE systems were long time customer ... and all
during FS, I continued to work on 360/370 and would periodically
ridicule what FS was doing (which wasn't exactly career
enhancing). Then the US HONE systems were consolidated in Palo Alto
with single-system-image, loosely-coupled, shared DASD including
load-balancing and fall-over across the complex.

In the morph of CP67->VM370, lots of stuff was simplified and/or
dropped (including tightly-coupled, shared memory multiprocessing). I
then add SMP/shared-memory support back to VM370, initially for US
HONE so they could add a 2nd processor to each system (for 16
processors total, with each 2-processor system getting twice the
throughput of a single processor).

When FS finally implodes there is mad rush to get stuff back into 370
product pipelines, including kicking off the quick&dirty 3033&3081
efforts in parallel. I also get roped into helping with a 16processor,
tightly-coupled, shared memory 370 multiprocessor and we con the 3033
processor engineers into working on it in their spare time (lot more
interesting than remapping 370/168 logic to 20% faster chips).

At first everybody thought it was great until somebody tells the head
of POK that it could be decades before POK favorite son operating
system (i.e. "MVS") had (effective) 16-way support (at the time, MVS
documents had 2processor SMP with only 1.2-1.5 times the throughput of
single processor, aka MVS multiprocessor overhead ... note POK doesn't
ship a 16processor system until after the turn of the century). Then
head of POK invites some of us to never visit POK again and directs
3033 processor heads down, no distractions.

3081 originally was going to be multiprocessor only ... and each 3081D
processor was supposed to be faster than a 3033 ... but several
benchmarks were showing them slower. Then the processor cache size is
doubled for the 3081K and the aggregate 2-processor MIP rate was about
the same as the Amdahl single processor (and MVS 3081K throughput much
less because of its SMP overhead, aka approx .6-.75 of an Amdahl
single processor).

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
HONE (& APL) posts
https://www.garlic.com/~lynn/subtopic.html#hone

--
virtualization experience starting Jan1968, online at home since Mar1970

Gene Amdahl

From: Lynn Wheeler <lynn@garlic.com>
Subject: Gene Amdahl
Date: 11 Aug, 2024
Blog: Linkedin

re:
https://www.garlic.com/~lynn/2022e.html#37 Gene Amdahl

SHARE Original/Founding Knights of VM
http://mvmua.org/knights.html
IBM Mainframe Hall of Fame
https://www.enterprisesystemsmedia.com/mainframehalloffame
IBM System Mag article (some history details slightly garbled)
https://web.archive.org/web/20200103152517/http://archive.ibmsystemsmag.com/mainframe/stoprun/stop-run/making-history/

late 70s and early 80s, I was also blamed for online computer
conferencing on the internal network (larger than arpanet/internet
from just about the beginning until sometime mid/late 80s); folklore
was that when the corporate executive committee was told, 5of6 wanted
to fire me. It had really taken off spring of 1981 when I distributed
a trip report of a visit to Jim Gray (who had departed IBM SJR for
Tandem fall 1980); only about 300 participated but claims were that
25,000 were reading. From when IBM Jargon was young and "Tandem
Memos" was new
https://comlay.net/ibmjarg.pdf

Tandem Memos - n. Something constructive but hard to control; a fresh
of breath air (sic). That's another Tandem Memos. A phrase to worry
middle management. It refers to the computer-based conference (widely
distributed in 1981) in which many technical personnel expressed
dissatisfaction with the tools available to them at that time, and
also constructively criticized the way products were [are]
developed. The memos are required reading for anyone with a serious
interest in quality products. If you have not seen the memos, try
reading the November 1981 Datamation summary.

... snip ...

online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

--
virtualization experience starting Jan1968, online at home since Mar1970

The Forgotten History of the Financial Crisis. What the World Should Have Learned in 2008

From: Lynn Wheeler <lynn@garlic.com>
Subject: The Forgotten History of the Financial Crisis. What the World Should Have Learned in 2008
Date: 11 Aug, 2024
Blog: Facebook

The Forgotten History of the Financial Crisis. What the World Should
Have Learned in 2008
https://archive.ph/71aYu

"September and October of 2008 was the worst financial crisis in
global history, including the Great Depression." Ben Bernanke, then
the chair of the U.S. Federal Reserve, made this remarkable claim in
November 2009, just one year after the meltdown. Looking back today, a
decade after the crisis, there is every reason to agree with
Bernanke's assessment: 2008 should serve as a warning of the scale and
speed with which global financial crises can unfold in the
twenty-first century.

... snip ...

Financial houses 2001-2008 did over $27T in securitized
mortgages/loans: securitizing loans/mortgages, paying for triple-A
ratings (when the rating agencies knew they weren't worth triple-A,
from Oct2008 congressional hearings) and selling into the bond
market. YE2008, just the four largest too-big-to-fail banks were still
carrying $5.2T in offbook toxic CDOs.

Jan1999 I was asked to help prevent the coming economic mess. I was
told some investment bankers had walked away "clean" from the "S&L
Crisis", were then running Internet IPO mills (invest a few million,
"hype", then IPO for a couple billion; the companies needed to fail to
leave the field clear for the next round), and were predicted to next
get into securitized mortgages/loans. I was to help improve the
integrity of securitized loan/mortgage supporting documents (as
countermeasure). Then they found they could start doing no-document,
liar mortgages/loans, securitize them, pay for triple-A, and sell into
the bond market ("no documents", "no integrity")

Then they found they could start doing securitized mortgages/loans
designed to fail and then take out CDS gambling bets. The largest
holder of CDS gambling bets was AIG, which was negotiating to pay off
at 50 cents on the dollar. Then SECTREAS steps in and says they had to
sign a document that they couldn't sue those making the bets and take
TARP funds to pay off at 100 cents on the dollar. The largest
recipient of TARP funds was AIG and the largest recipient of face
value payoffs was the firm formerly headed by SECTREAS (note with only
$700B in TARP funds, it would have hardly made a dent in the toxic CDO
problem ... the real too-big-to-fail bailout had to be done by FEDRES).

Later it was found that some of the too-big-to-fail were money
laundering for terrorists and drug cartels (various stories that it
enabled drug cartels to buy military grade equipment, largely
responsible for violence on both sides of the border). There would be
repeated "deferred prosecutions" (promising never to do it again, each
time) ... supposedly if they repeated they would be prosecuted (but
apparently previous violations were consistently ignored). Gave rise
to too-big-to-prosecute and too-big-to-jail ... in addition to
too-big-to-fail.
https://en.wikipedia.org/wiki/Deferred_prosecution

For Big Companies, Felony Convictions Are a Mere Footnote. Boeing
guilty plea highlights how corporate convictions rarely have
consequences that threaten the business
https://archive.ph/if7H0

The accounting firm Arthur Andersen collapsed in 2002 after
prosecutors indicted the company for shredding evidence related to its
audits of failed energy conglomerate Enron. For years after Andersen's
demise, prosecutors held back from indicting major corporations,
fearing they would kill the firm in the process.

... snip ...

The Sarbanes-Oxley joke was that congress felt so badly about the
Andersen collapse that they really increased the audit requirements
for public companies (as a gift to the audit industry). The rhetoric
on the flr of congress was that SOX would prevent future ENRONs and
guarantee that executives & auditors did jail time ... however it
required SEC to do something. GAO did analysis of public company
fraudulent financial filings showing that they even increased after
SOX went into effect (and nobody doing jail time).
http://www.gao.gov/products/GAO-03-138
http://www.gao.gov/products/GAO-06-678
http://www.gao.gov/products/GAO-06-1053R

The other observation was that possibly the only part of SOX that was
going to make some difference might be the informants/whistle-blowers
(folklore was that one of the congressional members involved in SOX
had been former FBI involved in taking down organized crime and
supposedly what made it possible were informants/whistle-blowers)
... and SEC had a 1-800 number for companies to complain about audits,
but no whistle-blower hot line.

The head of the administration after the turn of the century presided
over letting the fiscal responsibility act expire (spending couldn't
exceed revenue, on its way to eliminating all federal debt), huge cut
in taxes (1st time taxes were cut to NOT pay for two wars), huge
increase in spending, explosion in debt (2010 CBO report: 2003-2009
taxes cut by $6T and spending increased $6T for a $12T gap compared to
a fiscally responsible budget), the economic mess (70 times larger than
his father's 80s S&L Crisis) and the forever wars, Cheney is VP,
Rumsfeld is SECDEF (again) and one of the Team B members is deputy
SECDEF (and major architect of Iraq policy).

economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess
federal reserve chairman posts
https://www.garlic.com/~lynn/submisc.html#fed.chairman
(triple-A rated) toxic CDO posts
https://www.garlic.com/~lynn/submisc.html#toxic.cdo
too big to fail (too big to prosecute, too big to jail) posts
https://www.garlic.com/~lynn/submisc.html#too-big-to-fail
TARP posts
https://www.garlic.com/~lynn/submisc.html#tarp
ZIRP posts
https://www.garlic.com/~lynn/submisc.html#zirp
fiscal responsibility act posts
https://www.garlic.com/~lynn/submisc.html#fiscal.responsibility.act
tax fraud, tax evasion, tax loopholes, tax abuse, tax avoidance, tax
haven posts
https://www.garlic.com/~lynn/submisc.html#tax.evasion
Sarbanes-Oxley posts
https://www.garlic.com/~lynn/submisc.html#sarbanes.oxley
whistle-blower posts
https://www.garlic.com/~lynn/submisc.html#whistleblower
ENRON posts
https://www.garlic.com/~lynn/submisc.html#enron
financial reporting fraud posts
https://www.garlic.com/~lynn/submisc.html#financial.reporting.fraud
S&L Crisis posts
https://www.garlic.com/~lynn/submisc.html#s&l.crisis
Team B posts
https://www.garlic.com/~lynn/submisc.html#team.b
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

Instruction Tracing

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: Instruction Tracing
Newsgroups: comp.arch
Date: Sun, 11 Aug 2024 16:58:03 -1000

John Levine <johnl@taugh.com> writes:

I also heard that the ROMP was originally intended for some sort of
word processor from the Office Products division (the O in ROMP) and
was repurposed into a workstation.

late 70s, circa 1980 ... there were efforts to convert myriad of CISC
microprocessors to 801/RISC microprocessors ... Iliad for low/mid
range 370s (4361/4381 following on to 4331/4341), ROMP
(research/office products) for Displaywriter follow-on (with CP.r
operating system implemented in PL.8), also AS/400 follow-on to
S/38. For various reasons these efforts floundered and some of the
801/RISC engineers left IBM for other vendors.

I helped with a white paper that showed that nearly the whole 370
could be implemented directly in circuits (much more efficient than
microcode) for the 4361/4381. AS/400 returned to a CISC
microprocessor. The follow-on
to displaywriter was canceled (most of that market moving to IBM/PC
and other personal computers).

Austin group decided to pivot ROMP to Unix workstation market and got
the company that had done AT&T UNIX port to IBM/PC as PC/IX to do one
for ROMP (AIX, possibly "Austin IX" for PC/RT). They also had to do
something with the 200 or so PL.8 programmers and decided to use them
to implement an "abstract" virtual machine as "VRM" ... telling the
company doing the UNIX port that it would be much more efficient and
timely for them to implement to the VRM interface (rather than bare
hardware). Besides other issues with that claim, it introduced a new
problem: device drivers had to be done twice, once in "C" for the
unix/AIX layer and again in "PL.8" for the VRM.

Palo Alto was working on a port of UCB BSD to 370 and got redirected
to port to the PC/RT ... they demonstrated that they did the BSD port
to ROMP directly ... with much less effort than either the VRM
implementation or the AIX implementation ... released as "AOS".

trivia: early 80s 1) IBM Los Gatos lab was working on single chip
"Blue Iliad", 1st 32bit 801/RISC, really hot, single large chip that
never quite came to fruition and 2) IBM Boeblingen lab had done ROMAN,
a 3-chip 370 implementation (with the performance of a 370/168). I had
a proposal
to see how many chips I could cram into single rack (either "Blue
Iliad" or ROMAN or combination of both).

While AS/400 1st reverted to a CISC chip ... later in the 90s, out of
the Somerset (AIM: apple, ibm, motorola) single-chip power/pc effort
... they got an 801/RISC chip to move to.

801/risc, iliad, romp, rios, pc/rt, power, power/px
https://www.garlic.com/~lynn/subtopic.html#801

--
virtualization experience starting Jan1968, online at home since Mar1970

Netscape

From: Lynn Wheeler <lynn@garlic.com>
Subject: Netscape
Date: 12 Aug, 2024
Blog: Facebook

trivia: when NCSA complained about their use of "MOSAIC", what silicon
valley company did they get "NETSCAPE" from???

I was told CISCO transferred it to them ... supposedly as part of
promoting/expanding TCP/IP and the Internet.

I had been brought in as a consultant responsible for the webserver to
payment networks. Two former Oracle people (that I had worked with on
HA/CMP and RDBMS cluster scaleup) were there, responsible for
something called "commerce server", and wanted to do payment
transactions; they also wanted to use "SSL" ... the result is now
frequently called "electronic commerce". I did a talk on "Internet
Isn't Business
Critical Dataprocessing" based on the software, processes, and
documentation I had to do (Postel sponsored talk at ISI/USC).

large mall paradigm supporting multiple stores ... originally funded
by a telco that was looking at it being a service offering ... had a
conventional leased line for transactions into the payment networks.
Then netscape wanted to offer individual ecommerce webservers with
transactions through the internet to a payment gateway into the
payment networks.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
payment gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

--
virtualization experience starting Jan1968, online at home since Mar1970

The Forgotten History of the Financial Crisis. What the World Should Have Learned in 2008

From: Lynn Wheeler <lynn@garlic.com>
Subject: The Forgotten History of the Financial Crisis. What the World Should Have Learned in 2008
Date: 12 Aug, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024e.html#39 The Forgotten History of the Financial Crisis. What the World Should Have Learned in 2008

Note: CIA Director Colby wouldn't approve the "Team B" report/analysis
that had inflated Soviet military capabilities, part of justifying a
large DOD budget increase. White House Chief of Staff Rumsfeld then
gets Colby replaced
with Bush1 (who would approve it), after which Rumsfeld resigns and
becomes SECDEF (and Rumsfeld's assistant Cheney becomes Chief of
Staff).
https://en.wikipedia.org/wiki/Team_B

Then Bush1 becomes VP and claims he knows nothing about
https://en.wikipedia.org/wiki/Iran%E2%80%93Contra_affair
because he was full-time administration point person deregulating the
financial industry causing the S&L crisis
http://en.wikipedia.org/wiki/Savings_and_loan_crisis
along with other members of his family
http://en.wikipedia.org/wiki/Savings_and_loan_crisis#Silverado_Savings_and_Loan
and another
http://query.nytimes.com/gst/fullpage.html?res=9D0CE0D81E3BF937A25753C1A966958260

and Bush1 and Rumsfeld are also working with Saddam, supporting Iraq
in
http://en.wikipedia.org/wiki/Iran%E2%80%93Iraq_War
including supplying WMDs
http://en.wikipedia.org/wiki/United_States_support_for_Iraq_during_the_Iran%E2%80%93Iraq_war

In the early 90s, Bush1 is president and Cheney is SECDEF. Sat. photo
recon analyst told white house that Saddam was marshaling forces to
invade Kuwait. White house said that saddam would do no such thing and
proceeded to discredit the analyst. Later the analyst informed the
white house that saddam was marshaling forces to invade Saudi Arabia,
now the white house has to choose between Saddam and the Saudis.
https://www.amazon.com/Long-Strange-Journey-Intelligence-ebook/dp/B004NNV5H2/

This century, Bush2 is president, Cheney is VP, Rumsfeld is SECDEF
(again), and one of the "Team B" members is deputy SECDEF and credited
with the Iraq policy
https://en.wikipedia.org/wiki/Paul_Wolfowitz

Cousin of White House Chief of Staff Card, was dealing with the Iraqis
at the UN and was given evidence that WMDs (tracing back to US in the
Iran/Iraq war) had been decommissioned. The cousin shared this with
Card, Powell and others ... then was locked up in a military hospital;
the book was published in 2010 (before the decommissioned WMDs were
declassified)
https://www.amazon.com/EXTREME-PREJUDICE-Terrifying-Story-Patriot-ebook/dp/B004HYHBK2/

NY Times series from 2014: the decommissioned WMDs (tracing back to US
from the Iran/Iraq war) had been found early in the invasion, but the
information was classified for a decade
http://www.nytimes.com/interactive/2014/10/14/world/middleeast/us-casualties-of-iraq-chemical-weapons.html

... the military-industrial-complex wanted a war so badly that
corporate reps were telling former eastern bloc countries that if they
voted for the IRAQ2 invasion in the UN, they would get membership in
NATO and (directed appropriation) USAID (which can *ONLY* be used for
the purchase of modern US arms). From the law of unintended
consequences: the invaders were told to bypass ammo dumps looking for
WMDs; when they got around to going back, over a million metric tons
had evaporated (later showing up in IEDs).
https://www.amazon.com/Prophets-War-Lockheed-Military-Industrial-ebook/dp/B0047T86BA/

Military-Industrial(-Congressional) Complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
Team B posts
https://www.garlic.com/~lynn/submisc.html#team.b
S&L Crisis posts
https://www.garlic.com/~lynn/submisc.html#s&l.crisis
perpetual war posts
https://www.garlic.com/~lynn/submisc.html#perpetual.war
WMD posts
https://www.garlic.com/~lynn/submisc.html#wmds

--
virtualization experience starting Jan1968, online at home since Mar1970

Netscape

From: Lynn Wheeler <lynn@garlic.com>
Subject: Netscape
Date: 12 Aug, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024e.html#41 Netscape

trivia: for the transactions through the internet, we had redundant
gateways with multiple connections into various (strategic) locations
around the network. I wanted to do router updates ... but the backbone
was in the process of transition to hierarchical routing ... so had to
make do with multiple (DNS) "A-records". I then was giving a class to
20-30 recent graduate, paper millionaire employees (mostly working on
the browser) on business critical dataprocessing ... including
A-record operation, showing support examples from BSD4.3 reno/tahoe
clients ... and getting push back that it was too complex. Then I
started making references to the effect that if it wasn't in Stevens'
book, they wouldn't do it. It took me a year to get multiple A-record
support into the browser.
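
A minimal sketch (not what the browser actually did, just the shape of
the fix) of client-side multiple A-record support: resolve all the
addresses published for the host and try each in turn, instead of
giving up after the first one fails. The host name and port below are
illustrative.

import socket

def connect_any(host, port, timeout=10):
    # getaddrinfo returns every address record published for the host
    last_err = OSError("no addresses for %s" % host)
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        s = socket.socket(family, socktype, proto)
        s.settimeout(timeout)
        try:
            s.connect(sockaddr)    # first reachable address wins
            return s
        except OSError as err:
            s.close()              # that address is down, try the next
            last_err = err
    raise last_err

# e.g. sock = connect_any("www.example.com", 80)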

One of the first e-stores was a large sporting goods operation that
was doing TV advertising during weekend national football games
... this was in the period when ISPs were still doing weekend rolling
maintenance downtime windows. And even with their webserver having
multiple connections to different parts of the internet, there were
browsers that couldn't get a connection (because of missing multiple
A-record support).

payment gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

some posts mentioning netscape and multiple a-records
https://www.garlic.com/~lynn/2017h.html#47 Aug. 9, 1995: When the Future Looked Bright for Netscape
https://www.garlic.com/~lynn/2009o.html#40 The Web browser turns 15: A look back;
https://www.garlic.com/~lynn/2005i.html#9 More Phishing scams, still no SSL being used
https://www.garlic.com/~lynn/aepay4.htm#miscdns misc. other DNS
https://www.garlic.com/~lynn/aepay4.htm#comcert17 Merchant Comfort Certificates

--
virtualization experience starting Jan1968, online at home since Mar1970

Netscape

From: Lynn Wheeler <lynn@garlic.com>
Subject: Netscape
Date: 13 Aug, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024e.html#41 Netscape
https://www.garlic.com/~lynn/2024e.html#43 Netscape

trivia: some of the MIT CTSS/7094 people had gone to the 5th flr for
Multics, others went to the science center on the 4th flr to do virtual
machines and bunch of online stuff. science center wanted a 360/50 to
hardware modify with virtual memory, but all the extra 50s were going
to the FAA ATC product, so they had to settle for 360/40. They did
"CMS" on the bare 369/40 hardware in parallel with the 40 hardware
mods for virtual memory and development of virtual machine CP40 (then
CMS is moved to virtual machine and it is CP40/CMS). Then when 360/67
comes available standard with virtual memory, CP40/CMS morphs into
CP67/CMS. Later CP67/CMS morphs into VM370/CMS (after decision to add
virtual memory to all 370 machines).

GML is invented at the science center in 1969 and GML tag support is
added to SCRIPT (which was rewrite of CTSS RUNOFF for CMS). A decade
later, GML morphs into ISO standard SGML and after another decade
morphs into HTML at CERN. The 1st webserver in the US is at (CERN
sister site) Stanford SLAC on its VM370/CMS system
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994

After I had joined science center one of my hobbies had been enhanced
production operating systems for internal datacenters and one of my
long term customers was the online sales and marketing support HONE
systems ... 1st CP67/CMS then morphed into VM370/CMS and all US HONE
datacenters (in parallel, clone HONE datacenters were also cropping up
all over the world) were consolidated in Palo Alto,
single-system-image, shared DASD with load-balancing and fall-over
across the large complex of multiprocessors (trivia: when FACEBOOK 1st
moves into silicon valley, it is into a new bldg built next door to
the former consolidated US HONE datacenter). I had also transferred to
San Jose Research and we would have monthly user group meetings at
SLAC.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML posts
https://www.garlic.com/~lynn/submain.html#sgml
CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

--
virtualization experience starting Jan1968, online at home since Mar1970

Netscape

From: Lynn Wheeler <lynn@garlic.com>
Subject: Netscape
Date: 13 Aug, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024e.html#41 Netscape
https://www.garlic.com/~lynn/2024e.html#43 Netscape
https://www.garlic.com/~lynn/2024e.html#44 Netscape

some internet trivia: the primary person responsible for inventing GML
(in 1969) had, before that, been hired to promote Cambridge's CP67
wide-area network (RSCS/VNET, which morphs into the internal corporate
network, larger than arpanet/internet from just about the beginning
until sometime mid/late 80s; the technology was also used for the
corporate sponsored univ BITNET)
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm

Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.

... snip ...

Some of us transfer out to SJR in the 2nd half of the 70s (including
Edson, responsible for RSCS/VNET & the CP67 wide-area network). In the
early 80s, I get the HSDT project, T1 and faster computer links (both
terrestrial and satellite) and was working with the NSF director; was
supposed to get $20M to interconnect the NSF supercomputer
centers. Then congress cuts the budget, some other things happen and
eventually an RFP is released (in part based on what we already had
running). From the 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12

The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.

... snip ...

... NCSA (national center for supercomputing applications) got some of
the funding
http://www.ncsa.illinois.edu/enabling/mosaic

IBM internal politics was not allowing us to bid (being blamed for
online computer conferencing inside IBM likely contributed, folklore
is that 5of6 members of corporate executive committee wanted to fire
me). The NSF director tried to help by writing the company a letter
(3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and
director of Research, copying IBM CEO) with support from other
gov. agencies ... but that just made the internal politics worse (as
did claims that what we already had operational was at least 5yrs
ahead of the winning bid). As regional networks connect in, it becomes
the NSFNET backbone, precursor to the modern internet.

Edson,
https://en.wikipedia.org/wiki/Edson_Hendricks

In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.

... snip ...

SJMerc article about Edson (he passed aug2020) and "IBM'S MISSED
OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at
wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references
from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

GML, SGML, HTML posts
https://www.garlic.com/~lynn/submain.html#sgml
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

Netscape

From: Lynn Wheeler <lynn@garlic.com>
Subject: Netscape
Date: 13 Aug, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024e.html#41 Netscape
https://www.garlic.com/~lynn/2024e.html#43 Netscape
https://www.garlic.com/~lynn/2024e.html#44 Netscape
https://www.garlic.com/~lynn/2024e.html#45 Netscape

possibly more than you ever wanted to know:

23june1969, IBM unbundling, starting to charge for software, services,
maint., traumatic for mainstream software development; requirement that
revenue covers original development, ongoing development&support; had
forecast process for number of customers at low, medium and high price
... some mainstream software couldn't meet revenue at any forecasted
price. One was batch (MVS) system JES2 offering network support; NJE.

RSCS/VNET would meet the requirement (with large profit) even at the
lowest possible monthly price of $30/month. RSCS/VNET had done an
emulated NJE driver to allow connection of MVS batch systems to the
internal network. However NJE had some shortcomings: the code had come
from HASP and still had "TUCC" in cols 68-71 (from the univ. where it
originated), it used spare entries in the 255-entry table of pseudo
(spool) devices (usually around 160-180) and would trash traffic where
origin and/or destination weren't in the local table. As a result
internal JES2/NJE systems had to be restricted to boundary nodes
(hidden behind an RSCS/VNET filter) since the internal network was
approaching 700 nodes at the time.
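
A minimal sketch (illustrative only, not actual JES2 or RSCS code) of
why the capped node table mattered on a network approaching 700 nodes:
NJE discarded traffic whose origin or destination wasn't in its local
table, while store-and-forward RSCS/VNET just passed traffic for
unknown destinations along toward a neighbor:

NJE_TABLE_LIMIT = 170   # spare pseudo-device entries, usually 160-180
nje_known = {"NODE%03d" % i for i in range(NJE_TABLE_LIMIT)}

def nje_handle(origin, destination, payload):
    if origin not in nje_known or destination not in nje_known:
        return None                             # traffic trashed
    return ("deliver-or-forward", destination, payload)

def rscs_handle(destination, payload, local_node, next_hop):
    if destination == local_node:
        return ("deliver", destination, payload)
    return ("forward", next_hop, payload)       # still gets through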

Also the early 70s "Future System" project had recently imploded
(different from 360/370 and was going to completely replace it, during
FS internal politics had been killing off 370 efforts) and there was
mad rush to get stuff back into the 370 product pipelines. Also the
head of POK (mainstream batch operation) had managed to convince
corporate to kill VM370/CMS product, shutdown the development group
and transfer all the people to POK for MVS/XA ... and was veto'ing any
announcement of RSCS/VNET (for customers).

JES2 rides in as savior(?): if RSCS/VNET could be announced as a
joint/combined product with (MVS) JES2/NJE, each at $600/month ... the
enormous projected RSCS/VNET revenue (especially at $600/month) could
be used to meet MVS JES2/NJE revenue requirement. The Endicott lab
(entry/mid range 370s) had managed to save the VM370 product mission
(for mid-range 370s) but had to recreate a development group from
scratch. Then there was big explosion in (Endicott mid-range) VM/4341s
... large corporations with orders for hundreds of machines at a time
for placing out in departmental areas (sort of the leading edge of the
coming distributed computing tsunami). Also, Jan1979 I get con'ed into
doing some benchmarks on an engineering VM/4341 for a national lab
looking at getting 70 for a compute farm (sort of the leading edge of
the
coming cluster supercomputing tsunami).

some of bitnet (1981) could also be credited to price/performance and
explosion in vm/4341s
https://en.wikipedia.org/wiki/BITNET

.... inside IBM, all the (internal) new vm/4341s helped push the
internal network over 1000 nodes in 1983. I was having constant HSDT
(T1 and faster computer links, both TCP/IP and non-SNA RSCS/VNET)
battles with the communication product group. IBM had the 2701
supporting T1 in the 60s ... but in the 70s, the mainstream move to
SNA/VTAM appeared to cap all the standard products at 56kbits (because
of VTAM shortcomings?)

23jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
HASP, ASP, JES2, JES3, NJE, NJI posts
https://www.garlic.com/~lynn/submain.html#hasp

--
virtualization experience starting Jan1968, online at home since Mar1970

Netscape

From: Lynn Wheeler <lynn@garlic.com>
Subject: Netscape
Date: 13 Aug, 2024
Blog: Facebook

re:
https://www.garlic.com/~lynn/2024e.html#41 Netscape
https://www.garlic.com/~lynn/2024e.html#43 Netscape
https://www.garlic.com/~lynn/2024e.html#44 Netscape
https://www.garlic.com/~lynn/2024e.html#45 Netscape
https://www.garlic.com/~lynn/2024e.html#46 Netscape

Other 4341 folklore: IBM communication group was blocking release of
mainframe TCP/IP support, part of fiercely fighting off distributed
computing and client/server (trying to preserve their dumb terminal
paradigm). When that got overturned, they changed their tactic and
declared since they had corporate strategic responsibility for
everything that crossed datacenter walls, it had to be released
through them. What shipped got aggregate of 44kbytes using nearly
whole 3090 processor. I then do support for RFC1044 and in some tuning
tests at Cray Research between Cray and 4341, get sustained 4341
channel throughput using only modest amount of 4341 processor
(something like 500 times improvement in bytes moved per instruction
executed).

RFC1044 support posts
https://www.garlic.com/~lynn/subnetwork.html#1044
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

PROFS

From: Lynn Wheeler <lynn@garlic.com>
Subject: PROFS
Date: 13 Aug, 2024
Blog: Facebook

The PROFS group was collecting internal apps to wrap 3270 menus
around.

A hot topic at Fridays after work was some killer app that would
attract mostly computer-illiterate employees. One was email
... another was an online telephone book. Jim Gray would spend one
week doing the telephone lookup app ... lookup had to be much faster
than reaching for the paper book on the desk and finding the number
... and I was to spend a week collecting organization softcopy of the
printed paper books to reformat into telephone book format.

There was a rapidly spreading rumor that members of the executive
committee were communicating via email. This was back when 3270
terminals were part of the annual budget and required VP-level
sign-off ... and then we'd find mid-level executives rerouting 3270
deliveries to their desks (and their administrative assistant's)
... to make it appear like they might be computer literate. There were
enormous numbers of 3270s that spent their life with the VM370 logon
logo (or possibly PROFS menu) being burned into the screen (with the
admin actually handling things like email). This continued at least
through some of the 90s with executives rerouting PS2/486 and 8514
screens to their desks (partly used for 3270 emulation & burning in
the VM370 logo or PROFS menu, and partly as status symbols).

PROFS got the source for a very early VMSG for the email client
... then when the VMSG author tried to offer them a much more mature
and enhanced version, an attempt was made to fire him. The whole thing
quieted down when he showed that his initials were in a non-displayed
field of every PROFS email. After that he only shared his source with
me and one other person.

Later I remember somebody claiming that when congress demanded all
executive branch PROFS notes involving CONTRA .... they had to find
somebody with every possible clearance to scan the backup tapes.

internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

some posts mentioning profs, vmsg, contra
https://www.garlic.com/~lynn/2022f.html#64 Trump received subpoena before FBI search of Mar-a-lago home
https://www.garlic.com/~lynn/2021j.html#23 Programming Languages in IBM
https://www.garlic.com/~lynn/2019d.html#96 PROFS and Internal Network
https://www.garlic.com/~lynn/2018f.html#54 PROFS, email, 3270
https://www.garlic.com/~lynn/2018.html#20 IBM Profs
https://www.garlic.com/~lynn/2018.html#18 IBM Profs
https://www.garlic.com/~lynn/2017b.html#74 The ICL 2900
https://www.garlic.com/~lynn/2016e.html#76 PROFS
https://www.garlic.com/~lynn/2014.html#13 Al-Qaeda-linked force captures Fallujah amid rise in violence in Iraq
https://www.garlic.com/~lynn/2014.html#1 Application development paradigms [was: RE: Learning Rexx]
https://www.garlic.com/~lynn/2013f.html#69 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2012e.html#55 Just for a laugh... How to spot an old IBMer
https://www.garlic.com/~lynn/2012d.html#47 You Don't Need a Cyber Attack to Take Down The North American Power Grid
https://www.garlic.com/~lynn/2011i.html#6 Robert Morris, man who helped develop Unix, dies at 78
https://www.garlic.com/~lynn/2011f.html#11 History of APL -- Software Preservation Group
https://www.garlic.com/~lynn/2011e.html#57 SNA/VTAM Misinformation
https://www.garlic.com/~lynn/2011b.html#83 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2009q.html#64 spool file tag data
https://www.garlic.com/~lynn/2002h.html#64 history of CMS

--
virtualization experience starting Jan1968, online at home since Mar1970

OSI and XTP

From: Lynn Wheeler <lynn@garlic.com>
Subject: OSI and XTP
Date: 13 Aug, 2024
Blog: Facebook

At Interop '88, I was surprised at all the OSI booths.

I was on Greg Chesson's XTP TAB ... and some gov. groups were involved
... so took it to the ISO-chartered ANSI X3S3.3 (responsible for layer
4&3 standards) as "HSP". Eventually X3S3.3 said that ISO required that
standards work can only be done for protocols that conform to the OSI
model ... XTP didn't qualify because it 1) supported internetworking
(aka TCP/IP), which doesn't exist in OSI, 2) skipped the layer 4/3
interface, and 3) went directly to the LAN MAC interface, which
doesn't exist in the OSI model.

Then the joke was that IETF required at least two interoperable
implementations before proceeding in the standards process, while ISO
didn't even require a standard to be implementable.

internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
interop 88 posts
https://www.garlic.com/~lynn/subnetwork.html#interop88
XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

--
virtualization experience starting Jan1968, online at home since Mar1970

