From: Lynn Wheeler <lynn@garlic.com> Subject: 2314 Disks Date: 29 Jul, 2024 Blog: Facebook
IBM Wilshire Showroom
when I graduated and joined the science center, it had a 360/67 with increasing banks of 2314s, quickly growing to five 8+1 banks plus a 5-drive bank (for 45 drives) ... the CSC FE then painted each bank's panel door a different color ... to help operators map a disk mount request address to a 2314 bank.
one of my hobbies after joining IBM was enhanced production operating systems for internal datacenters, and the online branch office sales&marketing support HONE systems were a long-time customer (1st cp67/cms, then vm370/cms) ... I most frequently stopped by HONE 1133 westchester and wilshire blvd (3424?) ... before all US HONE datacenters were consolidated in Palo Alto (trivia: when facebook 1st moves into silicon valley, it is into a new bldg built next to the former US consolidated HONE datacenter). First the US consolidated operation was single-system-image, loosely-coupled, shared DASD with load-balancing and fail-over across the complex (one of the largest in the world, some similar airlines TPF operations), then a 2nd processor was added to each system (making it the largest; TPF didn't get SMP multiprocessor support for another decade).
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE system posts
https://www.garlic.com/~lynn/subtopic.html#hone
CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
SMP, loosely-coupled, multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
Posts mentioning CSC & 45 2314 drives
https://www.garlic.com/~lynn/2021c.html#72 I/O processors, What could cause a comeback for big-endianism very slowly?
https://www.garlic.com/~lynn/2019.html#51 3090/3880 trivia
https://www.garlic.com/~lynn/2013d.html#50 Arthur C. Clarke Predicts the Internet, 1974
https://www.garlic.com/~lynn/2012n.html#60 The IBM mainframe has been the backbone of most of the world's largest IT organizations for more than 48 years
https://www.garlic.com/~lynn/2011.html#16 Looking for a real Fortran-66 compatible PC compiler (CP/M or DOS or Windows, doesn't matter)
https://www.garlic.com/~lynn/2003b.html#14 Disk drives as commodities. Was Re: Yamhill
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: TYMSHARE Dialup Date: 30 Jul, 2024 Blog: Facebook
IBM 2741 dialup at home, Mar1970-Jun1977, replaced by CDI Miniterm.
Note Aug1976, TYMSHARE starts offering their VM370/CMS-based online
computer conferencing to (user group) SHARE as VMSHARE ... archives
here:
http://vm.marist.edu/~vmshare
I cut a deal w/TYMSHARE to get a monthly tape dump of all VMSHARE (and later PCSHARE) files for putting up on the internal network and systems ... one of the biggest problems was the lawyers' concern that internal employees would be contaminated by unfiltered customer information.
Much later with M/D acquiring TYMSHARE, I was brought in to review GNOSIS for the spinoff:
Ann Hardy at Computer History Museum
https://www.computerhistory.org/collections/catalog/102717167
Ann rose up to become Vice President of the Integrated Systems
Division at Tymshare, from 1976 to 1984, which did online airline
reservations, home banking, and other applications. When Tymshare was
acquired by McDonnell-Douglas in 1984, Ann's position as a female VP
became untenable, and was eased out of the company by being encouraged
to spin out Gnosis, a secure, capabilities-based operating system
developed at Tymshare. Ann founded Key Logic, with funding from Gene
Amdahl, which produced KeyKOS, based on Gnosis, for IBM and Amdahl
mainframes. After closing Key Logic, Ann became a consultant, leading
to her cofounding Agorics with members of Ted Nelson's Xanadu project.
... snip ...
Ann Hardy
https://medium.com/chmcore/someone-elses-computer-the-prehistory-of-cloud-computing-bca25645f89
Ann Hardy is a crucial figure in the story of Tymshare and
time-sharing. She began programming in the 1950s, developing software
for the IBM Stretch supercomputer. Frustrated at the lack of
opportunity and pay inequality for women at IBM -- at one point she
discovered she was paid less than half of what the lowest-paid man
reporting to her was paid -- Hardy left to study at the University of
California, Berkeley, and then joined the Lawrence Livermore National
Laboratory in 1962. At the lab, one of her projects involved an early
and surprisingly successful time-sharing operating system.
... snip ...
If Discrimination, Then Branch: Ann Hardy's Contributions to Computing
https://computerhistory.org/blog/if-discrimination-then-branch-ann-hardy-s-contributions-to-computing/
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
online virtual machine commercial services
https://www.garlic.com/~lynn/submain.html#online
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: DASD CKD Date: 30 Jul, 2024 Blog: Facebook
CKD was a trade-off between i/o capacity and mainframe memory in the mid-60s ... but by the mid-70s the trade-off started to flip. IBM 3370 FBA appeared in the late 70s and then all disks started to migrate to fixed-block (this can be seen in the 3380 records/track formulas, where record size had to be rounded up to a fixed "cell size"). No CKD DASD have been made for decades; all are now simulated on industry-standard fixed-block disks.
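The fixed "cell size" rounding shows up directly in the 3380 records/track arithmetic; a minimal sketch, using the commonly published 32-byte-cell form of the formula (treat the constants here as illustrative of the rounding, not as authoritative IBM documentation):

```python
import math

def records_per_track_3380(dl, kl=0):
    """Records per 3380 track: every field is rounded UP to 32-byte cells,
    so usable capacity depends on record size, not just total bytes."""
    cells = 15                            # per-record overhead, in 32-byte cells
    if kl:
        cells += math.ceil((kl + 12) / 32)   # key field, rounded up to cells
    cells += math.ceil((dl + 12) / 32)       # data field, rounded up to cells
    return 1499 // cells                     # a 3380 track holds 1499 cells

print(records_per_track_3380(4096))    # 4K records per track -> 10
print(records_per_track_3380(47476))   # one full-track record -> 1
```

The point of the sketch is that a 4000-byte record and a 4096-byte record cost the same number of cells, i.e. the "fixed-block" rounding was already inside the CKD capacity formula.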
took two credit hr intro to fortran/computers; at the end of the semester I was hired to rewrite 1401 MPIO for 360/30. Univ was getting a 360/67 for tss/360 to replace the 709/1401 and temporarily got a 360/30 (that had 1401 microcode emulation) to replace the 1401, pending arrival of the 360/67 (univ shut down the datacenter on weekends, and I would have it dedicated, although 48hrs w/o sleep made Monday classes hard). Within a year of taking the intro class, the 360/67 showed up and I was hired fulltime with responsibility for OS/360 (tss/360 never came to production, so it ran as a 360/65 with os/360) ... and I continued to have my dedicated weekend time. Student fortran ran in under a second on the 709 (tape to tape), but initially over a minute on the 360/65. I install HASP and it cuts the time in half. I then start revamping stage2 sysgen to place datasets and PDS members to optimize disk seek and multi-track searches, cutting another 2/3rds to 12.9secs; never got better than the 709 until I install univ of waterloo WATFOR.
My 1st SYSGEN was R9.5MFT, then I started redoing stage2 sysgen with R11MFT. MVT shows up with R12, but I didn't do an MVT gen until R15/16 (the R15/16 disk format allowed specifying the VTOC cyl ... aka placing it somewhere other than cyl0 to reduce avg arm seek).
Bob Bemer history page (gone 404, but lives on at wayback machine)
https://web.archive.org/web/20180402200149/http://www.bobbemer.com/HISTORY.HTM
360s were originally to be ASCII machines ... but the ASCII unit
record gear wasn't ready ... so had to use old tab BCD gear (and
EBCDIC) ... biggest computer goof ever:
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM
Learson is named in the "biggest computer goof ever" ... later he is CEO
and tried (and failed) to block the bureaucrats, careerists and MBAs
from destroying the Watson culture and legacy ... then 20yrs later, IBM
has one of the largest losses in the history of US companies and was
being reorged into the 13 "baby blues" in preparation for breaking up
the company
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
DASD CKD, FBA, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd
getting to play disk engineer in bldg 14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
some recent posts mentioning ASCII and/or Bob Bemer
https://www.garlic.com/~lynn/2024d.html#107 Biggest Computer Goof Ever
https://www.garlic.com/~lynn/2024d.html#105 Biggest Computer Goof Ever
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#99 Interdata Clone IBM Telecommunication Controller
https://www.garlic.com/~lynn/2024d.html#74 Some Email History
https://www.garlic.com/~lynn/2024d.html#33 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#53 IBM 3705 & 3725
https://www.garlic.com/~lynn/2024c.html#16 CTSS, Multicis, CP67/CMS
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024c.html#14 Bemer, ASCII, Brooks and Mythical Man Month
https://www.garlic.com/~lynn/2024b.html#114 EBCDIC
https://www.garlic.com/~lynn/2024b.html#113 EBCDIC
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#81 rusty iron why ``folklore''?
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#59 Vintage HSDT
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#102 EBCDIC Card Punch Format
https://www.garlic.com/~lynn/2024.html#100 Multicians
https://www.garlic.com/~lynn/2024.html#40 UNIX, MULTICS, CTSS, CSC, CP67
https://www.garlic.com/~lynn/2024.html#31 MIT Area Computing
https://www.garlic.com/~lynn/2024.html#26 1960's COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME Origin and Technology (IRS, NASA)
https://www.garlic.com/~lynn/2024.html#12 THE RISE OF UNIX. THE SEEDS OF ITS FALL
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: TYMSHARE Dialup Date: 30 Jul, 2024 Blog: Facebook
re:
CSC comes out to install CP67/CMS (3rd installation, after CSC itself and MIT Lincoln Labs; precursor to VM370), which I mostly got to play with during my dedicated weekend time. The first few months I mostly spent rewriting CP67 pathlengths for running os/360 in a virtual machine; the test os/360 stream ran 322secs on the bare machine, but initially 856secs virtually (534secs of CP67 CPU); I got the CP67 CPU down to 113secs. CP67 had 1052 & 2741 support with automatic terminal type identification (controller SAD CCW to switch a port's scanner terminal type). Univ had some TTY terminals, so I added TTY support integrated with the automatic terminal type identification.
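The timings above fit together: the initial CP67 CPU figure is just the virtual-run minus bare-run difference, and the pathlength rewrites cut that overhead; a quick check of the arithmetic:

```python
# CP67 virtualization overhead, from the measurements quoted above
bare = 322            # os/360 test stream on the real machine (secs)
virtual = 856         # same stream in a virtual machine, before rewrites

cp67_before = virtual - bare   # initial CP67 CPU overhead -> 534 secs
cp67_after = 113               # CP67 CPU after pathlength rewrites

print(cp67_before)             # 534
print(bare + cp67_after)       # ~435 secs total after the rewrites
```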
I then wanted a single dial-up number ("hunt group") for all
terminals ... but IBM had taken a short-cut and hardwired port
line-speed ... which kicks off a univ. program to build a clone
controller: building a channel interface board for an Interdata/3,
programmed to emulate the IBM controller with the addition of automatic
line-speed detection. Later upgraded to an Interdata/4 for the channel
interface with a cluster of Interdata/3s for the port interfaces. Four
of us get written up as responsible for (some part of) the IBM clone
controller business ... initially sold by Interdata and then by Perkin-Elmer
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division
Turn of the century: on a datacenter tour, a descendant of the Interdata telecommunication controller was handling the majority of all credit-card swipe dial-up terminals east of the Mississippi.
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Private Equity Date: 31 Jul, 2024 Blog: Facebook
re:
When private equity buys a hospital, assets shrink, new research
finds. The study comes as U.S. regulators investigate the industry's
profit-taking and its effect on patient care.
https://archive.ph/ClHZ5
Private Equity Professionals Are 'Fighting Fires' in Their Portfolios,
Slowing Down the Recovery. At the same time, "the interest rate spike
has raised the stakes of holding an asset longer," says Bain & Co.
https://www.institutionalinvestor.com/article/2dkcwhdzmq3njso767d34/portfolio/private-equity-professionals-are-fighting-fires-in-their-portfolios-slowing-down-the-recovery
private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: For Big Companies, Felony Convictions Are a Mere Footnote Date: 31 Jul, 2024 Blog: Facebook
For Big Companies, Felony Convictions Are a Mere Footnote. Boeing guilty plea highlights how corporate convictions rarely have consequences that threaten the business
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
some posts mentioning Boeing (& fraud):
https://www.garlic.com/~lynn/2019d.html#42 Defense contractors aren't securing sensitive information, watchdog finds
https://www.garlic.com/~lynn/2018d.html#37 Imagining a Cyber Surprise: How Might China Use Stolen OPM Records to Target Trust?
https://www.garlic.com/~lynn/2018c.html#26 DoD watchdog: Air Force failed to effectively manage F-22 modernization
https://www.garlic.com/~lynn/2017h.html#55 Pareto efficiency
https://www.garlic.com/~lynn/2017h.html#54 Pareto efficiency
https://www.garlic.com/~lynn/2015f.html#42 No, the F-35 Can't Fight at Long Range, Either
https://www.garlic.com/~lynn/2014i.html#13 IBM & Boyd
https://www.garlic.com/~lynn/2012g.html#3 Quitting Top IBM Salespeople Say They Are Leaving In Droves
https://www.garlic.com/~lynn/2011f.html#88 Court OKs Firing of Boeing Computer-Security Whistleblowers
https://www.garlic.com/~lynn/2010f.html#75 Is Security a Curse for the Cloud Computing Industry?
https://www.garlic.com/~lynn/2007c.html#18 Securing financial transactions a high priority for 2007
https://www.garlic.com/~lynn/2007.html#5 Securing financial transactions a high priority for 2007
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: 2314 Disks Date: 31 Jul, 2024 Blog: Facebook
re:
trivia: ACP/TPF got a 3830 symbolic lock RPQ for loosely-coupled operation (sort of like the later DEC VAXCluster implementation), much faster than the reserve/release protocol ... but it was limited to four-system operation (the disk division discontinued it since it conflicted with string switch, which required two 3830s). HONE did an interesting hack that simulated the processor compare&swap instruction semantics and worked across string switch ... so it extended to eight-system operation (and with SMP, 16 processors).
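For reference, the compare-and-swap semantics that the HONE channel-program hack simulated across shared DASD can be sketched as below; this is a minimal illustration with the atomicity supplied by a thread lock rather than by I/O serialization (the class and function names are my own, not HONE's):

```python
import threading

class CASWord:
    """Sketch of compare-and-swap semantics: the update succeeds only if
    the word still holds the value the caller last read."""
    def __init__(self, value=0):
        self._value = value
        self._mutex = threading.Lock()   # stands in for the I/O serialization

    def compare_and_swap(self, expected, new):
        with self._mutex:
            if self._value == expected:
                self._value = new
                return True              # no interference since the read
            return False                 # someone else changed it; caller retries

    def read(self):
        with self._mutex:
            return self._value

def add(word, n):
    # the standard CAS retry loop: re-read and retry on interference
    while True:
        old = word.read()
        if word.compare_and_swap(old, old + n):
            return
```

The retry loop is what makes the primitive composable across systems: any number of hosts can update the same shared word without a global lock, as long as the compare-and-update step itself is serialized.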
archived email with ACP/TPF 3830 disk controller lock RPQ details
... only serializes I/O for channels connected to same 3830
controller.
https://www.garlic.com/~lynn/2008i.html#email800325
in this post which has a little detail about HONE I/O that simulates
the processor compare-and-swap instruction semantics (works across
string switch and multiple disk controllers)
https://www.garlic.com/~lynn/2008i.html#39
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE system posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, multiprocessor (and some compare-and-swap)
https://www.garlic.com/~lynn/subtopic.html#smp
other posts getting to play disk engineer in bldgs 14&15
https://www.garlic.com/~lynn/subtopic.html#disk
recent posts mentioning HONE, loosely-coupled, single-system-image
https://www.garlic.com/~lynn/2024d.html#113 ... some 3090 and a little 3081
https://www.garlic.com/~lynn/2024d.html#100 Chipsandcheese article on the CDC6600
https://www.garlic.com/~lynn/2024d.html#92 Computer Virtual Memory
https://www.garlic.com/~lynn/2024d.html#62 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024c.html#119 Financial/ATM Processing
https://www.garlic.com/~lynn/2024c.html#112 Multithreading
https://www.garlic.com/~lynn/2024c.html#90 Gordon Bell
https://www.garlic.com/~lynn/2024b.html#72 Vintage Internet and Vintage APL
https://www.garlic.com/~lynn/2024b.html#31 HONE, Performance Predictor, and Configurators
https://www.garlic.com/~lynn/2024b.html#26 HA/CMP
https://www.garlic.com/~lynn/2024b.html#18 IBM 5100
https://www.garlic.com/~lynn/2024.html#112 IBM User Group SHARE
https://www.garlic.com/~lynn/2024.html#88 IBM 360
https://www.garlic.com/~lynn/2024.html#78 Mainframe Performance Optimization
https://www.garlic.com/~lynn/2023g.html#72 MVS/TSO and VM370/CMS Interactive Response
https://www.garlic.com/~lynn/2023g.html#30 Vintage IBM OS/VU
https://www.garlic.com/~lynn/2023g.html#6 Vintage Future System
https://www.garlic.com/~lynn/2023f.html#92 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#41 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023e.html#87 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023e.html#75 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023e.html#53 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2023d.html#106 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#90 IBM 3083
https://www.garlic.com/~lynn/2023d.html#23 VM370, SMP, HONE
https://www.garlic.com/~lynn/2023c.html#77 IBM Big Blue, True Blue, Bleed Blue
https://www.garlic.com/~lynn/2023c.html#10 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#80 IBM 158-3 (& 4341)
https://www.garlic.com/~lynn/2023.html#91 IBM 4341
https://www.garlic.com/~lynn/2023.html#49 23Jun1969 Unbundling and Online IBM Branch Offices
https://www.garlic.com/~lynn/2022h.html#112 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022h.html#2 360/91
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#88 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#61 Datacenter Vulnerability
https://www.garlic.com/~lynn/2022f.html#59 The Man That Helped Change IBM
https://www.garlic.com/~lynn/2022f.html#53 z/VM 50th - part 4
https://www.garlic.com/~lynn/2022f.html#44 z/VM 50th
https://www.garlic.com/~lynn/2022f.html#30 IBM Power: The Servers that Apple Should Have Created
https://www.garlic.com/~lynn/2022f.html#10 9 Mainframe Statistics That May Surprise You
https://www.garlic.com/~lynn/2022e.html#50 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022d.html#62 IBM 360/50 Simulation From Its Microcode
https://www.garlic.com/~lynn/2022b.html#8 Porting APL to CP67/CMS
https://www.garlic.com/~lynn/2022.html#101 Online Computer Conferencing
https://www.garlic.com/~lynn/2022.html#81 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#29 IBM HONE
https://www.garlic.com/~lynn/2021k.html#23 MS/DOS for IBM/PC
https://www.garlic.com/~lynn/2021j.html#108 168 Loosely-Coupled Configuration
https://www.garlic.com/~lynn/2021j.html#25 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021i.html#77 IBM ACP/TPF
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021g.html#34 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021f.html#30 IBM HSDT & HA/CMP
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2021d.html#43 IBM Powerpoint sales presentations
https://www.garlic.com/~lynn/2021b.html#80 AT&T Long-lines
https://www.garlic.com/~lynn/2021b.html#15 IBM Recruiting
https://www.garlic.com/~lynn/2021.html#86 IBM Auditors and Games
https://www.garlic.com/~lynn/2021.html#74 Airline Reservation System
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: For Big Companies, Felony Convictions Are a Mere Footnote Date: 31 Jul, 2024 Blog: Facebook
re:
The rest is:
The accounting firm Arthur Andersen collapsed in 2002 after
prosecutors indicted the company for shredding evidence related to its
audits of failed energy conglomerate Enron. For years after Andersen's
demise, prosecutors held back from indicting major corporations,
fearing they would kill the firm in the process.
... snip ...
The Sarbanes-Oxley joke was that congress felt so badly about the
Andersen collapse that they really increased the audit requirements
for public companies. The rhetoric on the floor of congress was that
SOX would prevent future ENRONs and guarantee that executives &
auditors did jail time ... however it required the SEC to do
something. GAO did analysis of public company fraudulent financial
filings, showing that they even increased after SOX went into effect
(and nobody was doing jail time).
http://www.gao.gov/products/GAO-03-138
http://www.gao.gov/products/GAO-06-678
http://www.gao.gov/products/GAO-06-1053R
The other observation was that possibly the only part of SOX that was going to make some difference might be the informants/whistleblowers (folklore was that one of the congressional members involved in SOX was former FBI, involved in taking down organized crime, and supposedly what made that possible were informants/whistleblowers).
Something similar showed up with the economic mess: financial houses 2001-2008 did over $27T in securitizing mortgages/loans ... aka paying for triple-A ratings (when the rating agencies knew they weren't worth triple-A, from Oct2008 congressional hearings) and selling into the bond market. At YE2008, just the four largest too-big-to-fail banks were still carrying $5.2T in offbook toxic CDOs.
Then it was found that some of the too-big-to-fail were money
laundering for terrorists and drug cartels (various stories that it
enabled drug cartels to buy military-grade equipment, largely
responsible for violence on both sides of the border). There would be
repeated "deferred prosecutions" (promising never to do it again, each
time) ... supposedly if they repeated they would be prosecuted (but
previous violations were consistently ignored). This gave rise to
too-big-to-prosecute and too-big-to-jail ... in addition to
too-big-to-fail.
https://en.wikipedia.org/wiki/Deferred_prosecution
trivia: 1999: I was asked to try and help block (we failed) the coming economic mess; 2004: I was invited to the annual conference of EU CEOs and heads of financial exchanges; that year's theme was that EU companies that dealt with US companies were being forced into performing SOX audits (aka I was there to discuss the effectiveness of SOX).
ENRON posts
https://www.garlic.com/~lynn/submisc.html#enron
Sarbanes-Oxley posts
https://www.garlic.com/~lynn/submisc.html#sarbanes-oxley
whistleblower posts
https://www.garlic.com/~lynn/submisc.html#whistleblower
Fraudulent Financial Filing posts
https://www.garlic.com/~lynn/submisc.html#financial.reporting.fraud
economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess
too-big-to-fail (too-big-to-prosecute, too-big-to-jail) posts
https://www.garlic.com/~lynn/submisc.html#too-big-to-fail
(offbook) toxic CDOs
https://www.garlic.com/~lynn/submisc.html#toxic.cdo
money laundering posts
https://www.garlic.com/~lynn/submisc.html#money.laundering
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
some posts mentioning deferred prosecution
https://www.garlic.com/~lynn/2024d.html#59 Too-Big-To-Fail Money Laundering
https://www.garlic.com/~lynn/2024.html#58 Sales of US-Made Guns and Weapons, Including US Army-Issued Ones, Are Under Spotlight in Mexico Again
https://www.garlic.com/~lynn/2024.html#19 Huge Number of Migrants Highlights Border Crisis
https://www.garlic.com/~lynn/2022h.html#89 As US-style corporate leniency deals for bribery and corruption go global, repeat offenders are on the rise
https://www.garlic.com/~lynn/2021k.html#73 Wall Street Has Deployed a Dirty Tricks Playbook Against Whistleblowers for Decades, Now the Secrets Are Spilling Out
https://www.garlic.com/~lynn/2018e.html#111 Pigs Want To Feed at the Trough Again: Bernanke, Geithner and Paulson Use Crisis Anniversary to Ask for More Bailout Powers
https://www.garlic.com/~lynn/2018d.html#60 Dirty Money, Shiny Architecture
https://www.garlic.com/~lynn/2017h.html#56 Feds WIMP
https://www.garlic.com/~lynn/2017b.html#39 Trump to sign cyber security order
https://www.garlic.com/~lynn/2017b.html#13 Trump to sign cyber security order
https://www.garlic.com/~lynn/2017.html#45 Western Union Admits Anti-Money Laundering and Consumer Fraud Violations, Forfeits $586 Million in Settlement with Justice Department and Federal Trade Commission
https://www.garlic.com/~lynn/2016e.html#109 Why Aren't Any Bankers in Prison for Causing the Financial Crisis?
https://www.garlic.com/~lynn/2016c.html#99 Why Is the Obama Administration Trying to Keep 11,000 Documents Sealed?
https://www.garlic.com/~lynn/2016c.html#41 Qbasic
https://www.garlic.com/~lynn/2016c.html#29 Qbasic
https://www.garlic.com/~lynn/2016b.html#73 Qbasic
https://www.garlic.com/~lynn/2016b.html#0 Thanks Obama
https://www.garlic.com/~lynn/2016.html#36 I Feel Old
https://www.garlic.com/~lynn/2016.html#10 25 Years: How the Web began
https://www.garlic.com/~lynn/2015h.html#65 Economic Mess
https://www.garlic.com/~lynn/2015h.html#47 rationality
https://www.garlic.com/~lynn/2015h.html#44 rationality
https://www.garlic.com/~lynn/2015h.html#31 Talk of Criminally Prosecuting Corporations Up, Actual Prosecutions Down
https://www.garlic.com/~lynn/2015f.html#61 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2015f.html#57 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2015f.html#37 LIBOR: History's Largest Financial Crime that the WSJ and NYT Would Like You to Forget
https://www.garlic.com/~lynn/2015f.html#36 Eric Holder, Wall Street Double Agent, Comes in From the Cold
https://www.garlic.com/~lynn/2015e.html#47 Do we REALLY NEED all this regulatory oversight?
https://www.garlic.com/~lynn/2015e.html#44 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2015e.html#23 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2015d.html#80 Greedy Banks Nailed With $5 BILLION+ Fine For Fraud And Corruption
https://www.garlic.com/~lynn/2014i.html#10 Instead of focusing on big fines, law enforcement should seek long prison terms for the responsible executives
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Ampere Arm Server CPUs To Get 512 Cores, AI Accelerator Date: 31 Jul, 2024 Blog: Facebook
AmpereOne Aurora In Development With Up To 512 Cores, AmpereOne Prices Published
There was a comparison of IBM's max-configured z196, 80 processors, 50BIPS, $30M ($600,000/BIPS) and an IBM E5-2600 server blade, 16 processors, 500BIPS, base list price $1815 ($3.63/BIPS). Note the BIPS benchmark is number of iterations of a program compared to the reference platform (not an actual count of instructions). At the time, large cloud operations claimed they were assembling their own server blades for 1/3rd the cost of brand-name servers ($605, $1.21/BIPS: ten times the BIPS of a max-configured z196 at 1/500,000th the price/BIPS). Then there were articles that the server chip vendors were shipping at least half their product directly to large cloud operators (that assemble their own servers). Shortly later, IBM sells off its server product line.
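The price/BIPS figures work out as below (a quick check of the numbers quoted above; the "1/3rd" self-assembly cost is the claim from the cloud operators, not a measured figure):

```python
# price/BIPS arithmetic from the z196 vs E5-2600 blade comparison
z196_price, z196_bips = 30_000_000, 50    # max-configured z196
blade_price, blade_bips = 1815, 500       # E5-2600 blade, base list price
diy_price = blade_price / 3               # cloud self-assembled, ~1/3rd cost

print(z196_price / z196_bips)    # 600000.0  $/BIPS
print(blade_price / blade_bips)  # 3.63      $/BIPS
print(diy_price / blade_bips)    # 1.21      $/BIPS

# ratio of z196 $/BIPS to self-assembled-blade $/BIPS
print((z196_price / z196_bips) / (diy_price / blade_bips))  # ~496,000
```

The final ratio is what rounds to the "1/500,000th the price/BIPS" figure.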
cloud megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter
some posts mentioning IBM z196/e5-2600
https://www.garlic.com/~lynn/2023g.html#40 Vintage Mainframe
https://www.garlic.com/~lynn/2022h.html#112 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022c.html#19 Telum & z16
https://www.garlic.com/~lynn/2022b.html#63 Mainframes
https://www.garlic.com/~lynn/2021j.html#56 IBM and Cloud Computing
https://www.garlic.com/~lynn/2021i.html#92 How IBM lost the cloud
https://www.garlic.com/~lynn/2021b.html#0 Will The Cloud Take Down The Mainframe?
https://www.garlic.com/~lynn/2014f.html#78 Over in the Mainframe Experts Network LinkedIn group
https://www.garlic.com/~lynn/2014f.html#67 Is end of mainframe near ?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Saudi Arabia and 9/11 Date: 31 Jul, 2024 Blog: Facebook
Saudi Arabia and 9/11
After 9/11, victims were prohibited from suing Saudi Arabia for responsibility; that wasn't lifted until 2013. Some recent progress in holding Saudi Arabia accountable:
New Claims of Saudi Role in 9/11 Bring Victims' Families Back to Court
in Lawsuit Against Riyadh
https://www.nysun.com/article/new-claims-of-saudi-role-in-9-11-bring-victims-families-back-to-court-in-lawsuit-against-riyadh
Lawyers for Saudi Arabia seek dismissal of claims it supported the
Sept. 11 hijackers
https://abcnews.go.com/US/wireStory/lawyers-saudi-arabia-seek-dismissal-claims-supported-sept-112458690
September 11th families suing Saudi Arabia back in federal court in
Lower Manhattan, New York City
https://abc7ny.com/post/september-11th-families-suing-saudi-arabia-back-federal-court-lower-manhattan-new-york-city/15126848/
Video: 'Wow, shocking': '9/11 Justice' president reacts to report on
possible Saudi involvement in 9/11
https://www.cnn.com/2024/07/31/us/video/saudi-arabia-9-11-report-eagleson-lead-digvid
9/11 defendants reach plea deal with Defense Department in Saudi
Arabia lawsuit
https://www.fox5ny.com/news/9-11-justice-families-saudi-arabia-lawsuit-hearing-attacks
9/11 families furious over plea deal for terror mastermind on same day
Saudi lawsuit before judge
https://www.bostonherald.com/2024/07/31/9-11-families-furious-over-plea-deal-for-terror-mastermind-on-same-day-saudi-lawsuit-before-judge/
Judge hears evidence against Saudi Arabia in 9/11 families lawsuit
https://www.newsnationnow.com/world/9-11-families-news-conference-saudi-lawsuit-hearing/
9/11 families furious over plea deal for terror mastermind on same day
Saudi lawsuit goes before judge | Nation World
https://www.rv-times.com/nation_world/9-11-families-furious-over-plea-deal-for-terror-mastermind-on-same-day-saudi-lawsuit/article_762df686-9b9e-58df-9140-30268848e252.html
Latest news: some of the recent 9/11 Saudi Arabia material wasn't released by the US gov. but was obtained from the British gov.
U.S. Signals It Will Release Some Still-Secret Files on Saudi Arabia
and 9/11
https://www.nytimes.com/2021/08/09/us/politics/sept-11-saudi-arabia-biden.html
Democratic senators increase pressure to declassify 9/11 documents
related to Saudi role in attacks
https://thehill.com/policy/national-security/566547-democratic-senators-increase-pressure-to-declassify-9-11-documents/
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
perpetual war posts
https://www.garlic.com/~lynn/submisc.html#perpetual.war
WMD posts
https://www.garlic.com/~lynn/submisc.html#wmds
some past posts mentioning Saudi Arabia and 9/11
https://www.garlic.com/~lynn/2020.html#22 The Saudi Connection: Inside the 9/11 Case That Divided the F.B.I
https://www.garlic.com/~lynn/2019e.html#143 "Undeniable Evidence": Explosive Classified Docs Reveal Afghan War Mass Deception
https://www.garlic.com/~lynn/2019e.html#85 Just and Unjust Wars
https://www.garlic.com/~lynn/2019e.html#70 Since 2001 We Have Spent $32 Million Per Hour on War
https://www.garlic.com/~lynn/2019e.html#67 Profit propaganda ads witch-hunt era
https://www.garlic.com/~lynn/2019d.html#99 Trump claims he's the messiah. Maybe he should quit while he's ahead
https://www.garlic.com/~lynn/2019d.html#79 Bretton Woods Institutions: Enforcers, Not Saviours?
https://www.garlic.com/~lynn/2019d.html#54 Global Warming and U.S. National Security Diplomacy
https://www.garlic.com/~lynn/2019d.html#7 You paid taxes. These corporations didn't
https://www.garlic.com/~lynn/2019b.html#56 U.S. Has Spent Six Trillion Dollars on Wars That Killed Half a Million People Since 9/11, Report Says
https://www.garlic.com/~lynn/2019.html#45 Jeffrey Skilling, Former Enron Chief, Released After 12 Years in Prison
https://www.garlic.com/~lynn/2019.html#42 Army Special Operations Forces Unconventional Warfare
https://www.garlic.com/~lynn/2018b.html#65 Doubts about the HR departments that require knowledge of technology that does not exist
https://www.garlic.com/~lynn/2016c.html#93 Qbasic
https://www.garlic.com/~lynn/2015g.html#13 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2015g.html#12 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2015d.html#54 The Jeb Bush Adviser Who Should Scare You
https://www.garlic.com/~lynn/2015.html#72 George W. Bush: Still the worst; A new study ranks Bush near the very bottom in history
https://www.garlic.com/~lynn/2014d.html#89 Difference between MVS and z / OS systems
https://www.garlic.com/~lynn/2014d.html#11 Royal Pardon For Turing
https://www.garlic.com/~lynn/2014d.html#4 Royal Pardon For Turing
https://www.garlic.com/~lynn/2014c.html#103 Royal Pardon For Turing
https://www.garlic.com/~lynn/2013j.html#30 What Makes a Tax System Bizarre?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: For Big Companies, Felony Convictions Are a Mere Footnote Date: 31 Jul, 2024 Blog: Facebook
re:
The Madoff congressional hearings had the person who had tried (unsuccessfully) for a decade to get the SEC to do something about Madoff (SEC's hands were finally forced when Madoff turned himself in; the story is that he had defrauded some unsavory characters and was looking for gov. protection). In any case, part of the hearing testimony was that informants turn up 13 times more fraud than audits (while the SEC had a 1-800 number to complain about audits, it didn't have a 1-800 "tip" line)
Madoff posts
https://www.garlic.com/~lynn/submisc.html#madoff
Sarbanes-Oxley posts
https://www.garlic.com/~lynn/submisc.html#sarbanes-oxley
whistleblower posts
https://www.garlic.com/~lynn/submisc.html#whistleblower
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Private Equity Giants Invest More Than $200M in Federal Races to Protect Their Lucrative Tax Loophole Date: 02 Aug, 2024 Blog: Facebook
Private Equity Giants Invest More Than $200M in Federal Races to Protect Their Lucrative Tax Loophole
... trivia: the industry had gotten such a bad reputation during the "S&L Crisis" that they changed the name to "private equity" and "junk bonds" became "high-yield bonds". There was a business TV news show where the interviewer repeatedly said "junk bonds" and the person being interviewed kept saying "high-yield bonds"
private-equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
tax fraud, tax evasion, tax loopholes, tax abuse, tax avoidance, tax
haven posts
https://www.garlic.com/~lynn/submisc.html#tax.evasion
S&L Crisis posts
https://www.garlic.com/~lynn/submisc.html#s&l.crisis
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: 360 1052-7 Operator's Console Date: 02 Aug, 2024 Blog: Facebook
I took a two credit hr intro to fortran/computers class and at the end of the semester was hired to rewrite 1401 MPIO for 360/30. Univ was getting 360/67 (for tss/360) to replace 709/1401 and temporarily got a 360/30 (replacing 1401) pending the 360/67. The univ. shutdown the datacenter over the weekend and I had the whole place dedicated, although 48hrs w/o sleep made monday classes hard. They gave me a bunch of hardware and software manuals and I got to design my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc and within a few weeks had a 2000 card assembler program. Within a year of taking the intro class, the 360/67 arrived and I was hired fulltime, responsible for os/360 (tss/360 never came to production) and I continued to have my 48hr weekend window. One weekend I had been at it for some 30+hrs when the 1052-7 console (same as on the 360/30) stopped typing and the machine would just ring the bell. I spent 30-40mins trying everything I could think of before I hit the 1052-7 with my fist and the paper dropped to the floor. It turns out that the end of the (fan-fold) paper had passed the paper sensing finger (resulting in 1052-7 unit check with intervention required) but there was enough friction to keep the paper in position and it wasn't apparent (until the console was jostled with my fist).
archived posts mentioning 1052-7 and end of paper
https://www.garlic.com/~lynn/2023f.html#108 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2022d.html#27 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022.html#5 360 IPL
https://www.garlic.com/~lynn/2017h.html#5 IBM System/360
https://www.garlic.com/~lynn/2017.html#38 Paper tape (was Re: Hidden Figures)
https://www.garlic.com/~lynn/2010n.html#43 Paper tape
https://www.garlic.com/~lynn/2006n.html#1 The System/360 Model 20 Wasn't As Bad As All That
https://www.garlic.com/~lynn/2006k.html#27 PDP-1
https://www.garlic.com/~lynn/2006f.html#23 Old PCs--environmental hazard
https://www.garlic.com/~lynn/2005c.html#12 The mid-seventies SHARE survey
https://www.garlic.com/~lynn/2002j.html#16 Ever inflicted revenge on hardware ?
https://www.garlic.com/~lynn/2001.html#3 First video terminal?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: 360 1052-7 Operator's Console Date: 02 Aug, 2024 Blog: Facebook
re:
... other trivia: student fortran jobs ran less than a second on the 709 (tape->tape) .... initially with os/360 on the 360/67 they ran over a minute. I install HASP and it cuts the time in half. I then start redoing stage2 sysgen to place datasets and PDS members to optimize arm seek and multi-track search, cutting another 2/3rds to 12.9secs; never got better than the 709 until I install univ. of waterloo WATFOR.
before I graduate, I was hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit). I thought the Renton datacenter was the largest in the world, a couple hundred million in 360s, 360/65s arriving faster than they could be installed, boxes constantly staged in hallways around the machine room. Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing field for payroll (although they enlarge the room and install a 360/67 for me to play with when I'm not doing other stuff).
recent posts mentioning Watfor and Boeing CFO/Renton
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024d.html#22 Early Computer Use
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#73 UNIX, MULTICS, CTSS, CSC, CP67
https://www.garlic.com/~lynn/2024.html#43 Univ, Boeing Renton and "Spook Base"
https://www.garlic.com/~lynn/2024.html#17 IBM Embraces Virtual Memory -- Finally
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: 50 years ago, CP/M started the microcomputer revolution Date: 03 Aug, 2024 Blog: Facebook
50 years ago, CP/M started the microcomputer revolution
Some of the MIT CTSS/7094 people
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
went to the 5th flr for Multics
https://en.wikipedia.org/wiki/Multics
Others went to the IBM science center on the 4th flr and did virtual machines.
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center
They originally wanted 360/50 to do hardware mods to add virtual
memory, but all the extra 360/50s were going to the FAA ATC program,
and so had to settle for a 360/40 ... doing CP40/CMS (control
program/40, cambridge monitor system)
https://en.wikipedia.org/wiki/IBM_CP-40
CMS would run on the 360/40 real machine (pending CP40 virtual machines being operational). CMS started with a single letter for each filesystem ("P", "S", etc) which was mapped to a "symbolic name" that started out mapped to a physical (360/40) 2311 disk, then later to minidisks
Then when 360/67 standard with virtual memory became available,
CP40/CMS morphs into CP67/CMS (later for VM370/CMS, virtual machine
370 and conversational monitor system). CMS Program Logic Manual
(CP67/CMS Version 3.1)
https://bitsavers.org/pdf/ibm/360/cp67/GY20-0591-1_CMS_PLM_Oct1971.pdf
(going back to CMS implementation for real 360/40) the system API
convention: pg4:
Symbolic Name: CON1, DSK1, DSK2, DSK3, DSK4, DSK5, DSK6, PRN1, RDR1,
PCH1, TAP1, TAP2
... snip ...
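That indirection can be sketched as a toy lookup (purely illustrative, not actual CMS code; the letter-to-symbolic pairing and the device addresses below are invented for the example): a filemode letter resolves to one of the symbolic names listed above, which in turn resolves to whatever real device (or, later, minidisk) it is currently mapped to.

```python
# Toy sketch of the CMS filemode -> symbolic name -> device indirection.
# The specific pairings and addresses here are invented for illustration.

# filemode letter -> symbolic device name
FILEMODE_TO_SYMBOLIC = {"P": "DSK1", "S": "DSK2", "T": "DSK3"}

# symbolic name -> device address (real 2311 on the 360/40, later a minidisk)
SYMBOLIC_TO_DEVICE = {"DSK1": "191", "DSK2": "192", "DSK3": "193"}

def resolve(filemode: str) -> str:
    """Resolve a single-letter filemode to its current device address."""
    symbolic = FILEMODE_TO_SYMBOLIC[filemode]
    return SYMBOLIC_TO_DEVICE[symbolic]

print(resolve("P"))  # -> "191"
```

Remapping a symbolic name (say, pointing DSK1 at a minidisk instead of a real 2311) leaves the user's filemode view unchanged, which is the point of the extra level of indirection.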
trivia: I took a two credit hr intro to fortran/computers class and at the end of the semester was hired to rewrite 1401 MPIO for 360/30. Univ was getting 360/67 for tss/360 to replace 709/1401 and temporarily got a 360/30 (that had 1401 microcode emulation) to replace the 1401, pending arrival of the 360/67 (Univ shutdown the datacenter on weekends, and I would have it dedicated, although 48hrs w/o sleep made Monday classes hard). I was given a bunch of hardware & software manuals and got to design my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc ... and within a few weeks had a 2000 card assembler program
Within a year of taking the intro class, the 360/67 showed up and I was hired fulltime with responsibility for OS/360 (tss/360 didn't come to production, so it ran as a 360/65 with os/360) ... and I continued to have my dedicated weekend time. Student fortran ran under a second on the 709 (tape to tape), but initially over a minute on the 360/65. I install HASP and it cuts the time in half. I then start revamping stage2 sysgen to place datasets and PDS members to optimize disk seek and multi-track searches, cutting another 2/3rds to 12.9secs; never got better than the 709 until I install univ of waterloo WATFOR. My 1st SYSGEN was R9.5MFT, then started redoing stage2 sysgen for R11MFT. MVT shows up with R12 but I didn't do an MVT gen until R15/16 (the R15/16 disk format shows up being able to specify the VTOC cyl ... aka placed other than cyl0 to reduce avg. arm seek).
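The arm-seek reasoning can be illustrated with a toy calculation (dataset names, cylinder numbers, and access weights below are invented, not measured): clustering the frequently referenced datasets near each other on the pack reduces the expected arm travel between successive accesses.

```python
# Toy illustration of dataset placement to cut average disk arm seek.
# All names, cylinder numbers, and access weights are invented examples.

def avg_seek(placement, weights):
    """Expected seek distance between two successive independent accesses,
    each dataset accessed with probability proportional to its weight."""
    total = sum(weights.values())
    probs = {d: w / total for d, w in weights.items()}
    return sum(probs[a] * probs[b] * abs(placement[a] - placement[b])
               for a in placement for b in placement)

# relative access frequencies (hot datasets accessed most)
weights = {"SYSRES": 5, "VTOC": 4, "FORTLIB": 3, "SCRATCH": 1}

# hot datasets scattered across a 200-cylinder pack vs. clustered together
scattered = {"SYSRES": 0, "VTOC": 100, "FORTLIB": 199, "SCRATCH": 50}
clustered = {"SYSRES": 95, "VTOC": 100, "FORTLIB": 105, "SCRATCH": 199}

print(avg_seek(scattered, weights) > avg_seek(clustered, weights))  # True
```

The same idea motivates moving the VTOC off cyl0 toward the middle of the frequently used cylinders: the arm spends most of its time near the hot data.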
along the way, CSC comes out to install CP67/CMS (3rd installation after CSC itself and MIT Lincoln Labs; precursor to VM370), which I mostly got to play with during my dedicated weekend time. The first few months I mostly spent rewriting CP67 pathlengths for running os/360 in a virtual machine; the os/360 test stream ran 322secs on the bare machine, initially 856secs virtually (CP67 CPU 534secs), which I got down to CP67 CPU 113secs. CP67 had 1052 & 2741 support with automatic terminal type identification (controller SAD CCW to switch the port scanner terminal type). Univ had some TTY terminals so I added TTY support integrated with automatic terminal type identification.
before msdos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was CP/M
https://en.wikipedia.org/wiki/CP/M
before developing CP/M, Kildall worked on IBM CP67/CMS at npg
https://en.wikipedia.org/wiki/Naval_Postgraduate_School
aka: CP/M ... control program/microcomputer
Opel's obit ...
https://www.pcworld.com/article/243311/former_ibm_ceo_john_opel_dies.html
According to the New York Times, it was Opel who met with Bill Gates,
CEO of the then-small software firm Microsoft, to discuss the
possibility of using Microsoft PC-DOS OS for IBM's
about-to-be-released PC. Opel set up the meeting at the request of
Gates' mother, Mary Maxwell Gates. The two had both served on the
National United Way's executive committee.
... snip ...
other trivia: Boca claimed that they weren't doing any software for ACORN (code name for the IBM/PC) and so a small IBM group in Silicon Valley formed to do ACORN software (many had been involved with CP/67-CMS and/or its follow-on VM/370-CMS) ... and every few weeks, there was contact with Boca confirming the decision hadn't changed. Then at some point, Boca changed its mind and the silicon valley group was told that if they wanted to do ACORN software, they would have to move to Boca (only one person accepted the offer; he didn't last long and returned to silicon valley). Then there was a joke that Boca didn't want any internal company competition and it was better to deal with an external organization via contract than with internal IBM politics.
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
some recent posts mentioning early work on CP67 pathlengths for
running os/360
https://www.garlic.com/~lynn/2024e.html#3 TYMSHARE Dialup
https://www.garlic.com/~lynn/2024d.html#111 GNOME bans Manjaro Core Team Member for uttering "Lunduke"
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#99 Interdata Clone IBM Telecommunication Controller
https://www.garlic.com/~lynn/2024d.html#90 Computer Virtual Memory
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024d.html#36 This New Internet Thing, Chapter 8
https://www.garlic.com/~lynn/2024c.html#53 IBM 3705 & 3725
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024b.html#114 EBCDIC
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#95 Ferranti Atlas and Virtual Memory
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024b.html#17 IBM 5100
https://www.garlic.com/~lynn/2024.html#94 MVS SRM
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#73 UNIX, MULTICS, CTSS, CSC, CP67
https://www.garlic.com/~lynn/2024.html#28 IBM Disks and Drums
https://www.garlic.com/~lynn/2024.html#17 IBM Embraces Virtual Memory -- Finally
https://www.garlic.com/~lynn/2024.html#12 THE RISE OF UNIX. THE SEEDS OF ITS FALL
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Downfall and Make-over Date: 03 Aug, 2024 Blog: Facebook
re:
SNA/TCPIP; 80s, the communication group was fighting off client/server and distributed computing, trying to preserve their dumb terminal paradigm ... late 80s, a senior disk engineer got a talk scheduled at an annual, world-wide, internal, communication group conference, supposedly on 3174 performance ... but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing a drop in disk sales with data fleeing datacenters to more distributed-computing friendly platforms. They had come up with a number of solutions but were constantly vetoed by the communication group (with their corporate strategic ownership of everything that crossed datacenter walls) ... the communication group datacenter stranglehold wasn't just disks, and a couple years later IBM has one of the largest losses in the history of US companies.
As partial work-around, senior disk division executive was investing in distributed computing startups that would use IBM disks ... and would periodically ask us to drop by his investments to see if we could provide any help.
Learson was CEO and tried (and failed) to block the bureaucrats,
careerists and MBAs from destroying Watson culture and legacy ... then
20yrs later IBM (w/one of the largest losses in the history of US
companies) was being reorged into the 13 "baby blues" in preparation
for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup of the company. Before we get started, the board brings in the former president of AMEX who (somewhat) reverses the breakup (although it wasn't long before the disk division is gone).
some more Learson detail
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
communication group trying to preserve dumb terminal paradigm posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: 50 years ago, CP/M started the microcomputer revolution Date: 04 Aug, 2024 Blog: Facebook
re:
some personal computing history
https://arstechnica.com/features/2005/12/total-share/
https://arstechnica.com/features/2005/12/total-share/2/
https://arstechnica.com/features/2005/12/total-share/3/
https://arstechnica.com/features/2005/12/total-share/4/
https://arstechnica.com/features/2005/12/total-share/5
https://arstechnica.com/features/2005/12/total-share/6/
https://arstechnica.com/features/2005/12/total-share/7/
https://arstechnica.com/features/2005/12/total-share/8/
https://arstechnica.com/features/2005/12/total-share/9/
https://arstechnica.com/features/2005/12/total-share/10/
old archived post with decade of vax sales (including microvax),
sliced and diced by year, model, us/non-us
https://www.garlic.com/~lynn/2002f.html#0
IBM 4300s sold into the same mid-range market as VAX and in about the same numbers (excluding microvax) for small/single unit orders; the big difference was large corporations with orders for hundreds of vm/4300s for placing out in departmental areas ... sort of the leading edge of the coming distributed computing tsunami.
other trivia: In jan1979, I was con'ed into doing an (old CDC6600 fortran) benchmark on an early engineering 4341 for a national lab that was looking at getting 70 for a compute farm, sort of the leading edge of the coming cluster supercomputing tsunami. A small vm/4341 cluster was much less expensive than a 3033, higher throughput, smaller footprint, less power&cooling; folklore is that POK felt so threatened that they got corporate to cut Endicott's allocation of a critical 4341 manufacturing component in half.
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Downfall and Make-over Date: 04 Aug, 2024 Blog: Facebook
re:
other background: AMEX and KKR were in competition for the (private
equity) take-over of RJR and KKR wins. KKR then runs into trouble
with RJR and hires away the AMEX president to help.
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco
Then IBM board hires former AMEX president to help with IBM make-over
... who uses some of the same tactics used at RJR (ref gone 404, but
lives on at wayback machine).
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml
The former AMEX president then leaves IBM to head up another major
private-equity company
http://www.motherjones.com/politics/2007/10/barbarians-capitol-private-equity-public-enemy/
"Lou Gerstner, former ceo of ibm, now heads the Carlyle Group, a
Washington-based global private equity firm whose 2006 revenues of $87
billion were just a few billion below ibm's. Carlyle has boasted
George H.W. Bush, George W. Bush, and former Secretary of State James
Baker III on its employee roster."
... snip ...
... around the turn of the century, private-equity firms were buying up beltway bandits and gov. contractors, hiring prominent politicians to lobby congress to outsource gov. to their companies, side-stepping laws blocking companies from using money from gov. contracts to lobby congress. The bought companies also were having their funding cut to the bone, maximizing revenue for the private equity owners; one poster child was a company doing outsourced high security clearances that was found to be just doing the paperwork, but not actually doing the background checks.
former amex president posts
https://www.garlic.com/~lynn/submisc.html#gerstner
private-equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
pension plan posts
https://www.garlic.com/~lynn/submisc.html#pension
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
success of failure posts
https://www.garlic.com/~lynn/submisc.html#success.of.failure
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Downfall and Make-over Date: 04 Aug, 2024 Blog: Facebook
re:
23jun1969 unbundling announcement, starting to charge for (application) software (made the case that kernel software should still be free), SE services, hardware maint.
SE training had included a trainee-type operation, part of a large group at the customer site ... however, they couldn't figure out how *NOT* to charge for trainee SEs at the customer location ... thus was born HONE, branch-office online access to CP67 datacenters, practicing with guest operating systems in virtual machines. The science center had also ported APL\360 to CMS for CMS\APL (fixes for large demand-page virtual memory workspaces and supporting system APIs for things like file I/O, enabling real-world applications). HONE then started offering CMS\APL-based sales&marketing support applications, which came to dominate all HONE activity (with guest operating system use dwindling away). One of my hobbies after joining IBM was enhanced operating systems for internal datacenters and HONE was a long-time customer.
Early 70s, IBM had the Future System effort, totally different from
360/370 and was going to completely replace 370; dearth of new 370
during FS is credited with giving the clone 370 makers (including
Amdahl) their market foothold (all during FS, I continued to work on
360/370 even periodically ridiculing what FS was doing, even drawing
analogy with long running cult film down at Central Sq; wasn't exactly
career enhancing activity). When FS implodes, there is mad rush to get
stuff back into the 370 product pipelines, including kicking off
quick&dirty 3033&3081 efforts in parallel.
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html
FS (failing) significantly accelerated the rise of the bureaucrats,
careerists, and MBAs .... From Ferguson & Morris, "Computer Wars: The
Post-IBM World", Time Books
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394
... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrong headedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive
... snip ...
repeat, CEO Learson had tried (and failed) to block bureaucrats,
careerists, MBAs from destroying Watson culture & legacy.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
20yrs later, IBM has one of the largest losses in the history of US
companies
In the wake of the FS implosion, the decision was changed to start charging for kernel software and some of my stuff (for internal datacenters) was selected to be initial guinea pig and I had to spend some amount of time with lawyers and business people on kernel software charging practices.
Application software practice was to forecast customer market at high, medium and low price (the forecasted customer revenue had to cover original development along with ongoing support, maintenance and new development). It was a great culture shock for much of IBM software development ... one solution was combining software packages, enormously bloated projects with extremely efficient projects (for "combined" forecast, efficient projects underwriting the extremely bloated efforts).
Trivia: after FS implodes, the head of POK also managed to convince corporate to kill the VM370 product, shutdown the development group and transfer all the people to POK for MVS/XA (Endicott eventually manages to save the VM370 product mission, but had to recreate a development group from scratch). I was also con'ed into helping with a 16-processor tightly-coupled, multiprocessor effort and we got the 3033 processor engineers working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips). Everybody thought it was great until somebody told the head of POK that it could be decades before POK's favorite son operating system (MVS) had effective 16-way support (i.e. IBM documentation at the time was that MVS 2-processor only had 1.2-1.5 times the throughput of a single processor). The head of POK then invited some of us to never visit POK again and told the 3033 processor engineers, heads down and no distractions (note: POK doesn't ship a 16-processor system until after the turn of the century).
Other trivia: Amdahl had won the battle to make ACS, 360 compatible
... folklore then was IBM executives killed ACS/360 because it would
advance the state of the art too fast and IBM would lose control of
the market. Amdahl then leaves IBM. Following has some ACS/360
features that don't show up until ES/9000 in the 90s:
https://people.computing.clemson.edu/~mark/acs_end.html
... note there were comments that if any other computer company had dumped so much money into such an enormous failed (Future System) project, they would never have survived (it took IBM another 20yrs before it nearly became extinct)
one of the last nails in the Future System coffin was analysis by the IBM Houston Science Center that if apps were moved from a 370/195 to an FS machine made out of the fastest available hardware technology, they would have the throughput of a 370/145 ... about a 30 times slowdown.
23jun1969 IBM unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
smp, tightly-coupled, multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: HONE, APL, IBM 5100 Date: 06 Aug, 2024 Blog: Facebook
23jun1969 unbundling announcement starts charging for (application) software, SE services, maint. etc. SE training used to include being part of a large group at the customer site, but they couldn't figure out how not to charge for trainee SE time ... so was born "HONE", online branch office access to CP67 datacenters, practicing with guest operating systems in virtual machines. IBM Cambridge Science Center also did the port of APL\360 to CP67/CMS for CMS\APL (lots of fixes for workspaces in large demand-page virtual memory and APIs for system services like file I/O, enabling lots of real world apps) and HONE started offering CMS\APL-based sales&marketing support applications ... which come to dominate all HONE use.
HONE transitions from CP67/CMS to VM370/CMS (and VM370 APL\CMS done at
Palo Alto Science Center) and clone HONE installations start popping
up all over the world (HONE by far largest use of APL). PASC also does
the 370/145 APL microcode assist (claims runs APL as fast as on
370/168) and prototypes for what becomes 5100
https://en.wikipedia.org/wiki/IBM_5110
https://en.wikipedia.org/wiki/IBM_PALM_processor
The US HONE datacenters are also consolidated in Palo Alto (across the back parking lot from PASC; trivia: when FACEBOOK 1st moves into Silicon Valley, it is into a new bldg built next door to the former US consolidated HONE datacenter). US HONE systems are enhanced with single-system image, loosely-coupled, shared DASD with load balancing and fall-over support (at least as large as any airline ACP/TPF installation) and then a 2nd processor is added to each system (16 processors aggregate) ... ACP/TPF not getting two-processor support for another decade. PASC helps (HONE) with lots of APL\CMS tweaks.
trivia: When I 1st joined IBM, one of my hobbies was enhanced production operating systems for internal datacenters, and HONE was long-time customer. One of my 1st IBM overseas trips was for HONE EMEA install in Paris (La Defense, "Tour Franklin?" brand new bldg, still brown dirt, not yet landscaped)
23jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
(internal) CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE (& APL) posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, multiprocessor support posts
https://www.garlic.com/~lynn/subtopic.html#smp
posts mentioning APL, PASC, PALM, 5100/5110:
https://www.garlic.com/~lynn/2024b.html#15 IBM 5100
https://www.garlic.com/~lynn/2023e.html#53 VM370/CMS Shared Segments
https://www.garlic.com/~lynn/2015c.html#44 John Titor was right? IBM 5100
https://www.garlic.com/~lynn/2013o.html#82 One day, a computer will fit on a desk (1974) - YouTube
https://www.garlic.com/~lynn/2005.html#44 John Titor was right? IBM 5100
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: TYMSHARE, ADVENTURE/games Date: 06 Aug, 2024 Blog: Facebook
On one of the visits to TYMSHARE they demo'ed a game somebody had found on the Stanford PDP10 and ported to VM370/CMS. I got a copy and made the executable available inside IBM ... and would send the source to anybody that got all points .... within a short period of time, new versions with more points appeared as well as a port to PLI.
We had an argument with corporate auditors who directed that all games be removed from the system. At the time most company 3270 logon screens included "For Business Purposes Only" ... our 3270 logon screens said "For Management Approved Uses" (and we claimed the games were human factors demo programs).
commercial virtual machine online service posts
https://www.garlic.com/~lynn/submain.html#online
some recent posts mentioning tymshare and adventure/games
https://www.garlic.com/~lynn/2024c.html#120 Disconnect Between Coursework And Real-World Computers
https://www.garlic.com/~lynn/2024c.html#43 TYMSHARE, VMSHARE, ADVENTURE
https://www.garlic.com/~lynn/2024c.html#25 Tymshare & Ann Hardy
https://www.garlic.com/~lynn/2023f.html#116 Computer Games
https://www.garlic.com/~lynn/2023f.html#60 The Many Ways To Play Colossal Cave Adventure After Nearly Half A Century
https://www.garlic.com/~lynn/2023f.html#7 Video terminals
https://www.garlic.com/~lynn/2023e.html#9 Tymshare
https://www.garlic.com/~lynn/2023d.html#115 ADVENTURE
https://www.garlic.com/~lynn/2023c.html#14 Adventure
https://www.garlic.com/~lynn/2023b.html#86 Online systems fostering online communication
https://www.garlic.com/~lynn/2023.html#37 Adventure Game
https://www.garlic.com/~lynn/2022e.html#1 IBM Games
https://www.garlic.com/~lynn/2022c.html#28 IBM Cambridge Science Center
https://www.garlic.com/~lynn/2022b.html#107 15 Examples of How Different Life Was Before The Internet
https://www.garlic.com/~lynn/2022b.html#28 Early Online
https://www.garlic.com/~lynn/2022.html#123 SHARE LSRAD Report
https://www.garlic.com/~lynn/2022.html#57 Computer Security
https://www.garlic.com/~lynn/2021k.html#102 IBM CSO
https://www.garlic.com/~lynn/2021h.html#68 TYMSHARE, VMSHARE, and Adventure
https://www.garlic.com/~lynn/2021e.html#8 Online Computer Conferencing
https://www.garlic.com/~lynn/2021b.html#84 1977: Zork
https://www.garlic.com/~lynn/2021.html#85 IBM Auditors and Games
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: 360/50 and CP-40 Date: 06 Aug, 2024 Blog: Facebook
IBM Cambridge Science Center had a similar problem: it wanted a 360/50 to modify for virtual memory, but all the spare 360/50s were going to FAA ATC ... and so they had to settle for a 360/40. They implemented virtual memory with an associative array that held process-ID and virtual page number for each real page (compare the Atlas associative array, which had just a virtual page number for each real page ... effectively a single large virtual address space).
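The CP/40 scheme can be sketched as a real-page-indexed (inverted) table; the frame count and entry layout below are illustrative assumptions, not CP/40's actual geometry:

```python
# Sketch of CP/40-style translation: one entry per REAL page frame
# holding (process_id, virtual_page); the matching entry's index is
# the real frame number. Sizes here are illustrative only.
NUM_FRAMES = 64
frame_table = [None] * NUM_FRAMES   # entry: (process_id, virtual_page)

def translate(process_id, virtual_page):
    """Return the real frame number, or None to signal a page fault."""
    for frame, entry in enumerate(frame_table):
        if entry == (process_id, virtual_page):
            return frame
    return None

# Atlas's array held only the virtual page number (one big address
# space); adding the process-id lets many address spaces share real
# storage without flushing the array on every process switch.
frame_table[5] = (1, 0x10)
assert translate(1, 0x10) == 5      # process 1's page 0x10 is in frame 5
assert translate(2, 0x10) is None   # same page number, other process: fault
```

The hardware searched all entries in parallel; the linear scan here just models the lookup semantics.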
The official IBM operating system for the (standard virtual memory) 360/67 was TSS/360, which peaked around 1200 people at a time when the science center had 12 people (including the secretary) morphing CP/40 into CP/67.
Melinda's history website
https://www.leeandmelindavarian.com/Melinda#VMHist
trivia: FE had a bootstrap diagnostic process that started with "scoping" components. With 3081 TCMs it was no longer possible to scope ... so a "service processor" was implemented on a UC processor (the microprocessor the communication group used for 37xx and other boxes) with probes into the TCMs for diagnostic purposes (a scope could be used to diagnose the service processor, bring it up, and it was then used to diagnose the 3081).
Moving to 3090, they decided on using a 4331 running a highly modified
version of VM370 Release 6 with all the screens implemented in CMS
IOS3270. This was then upgraded to a pair of 4361s for the service
processor. You can sort of see this in the 3092 requiring a pair of
3370 FBA drives (one for each 4361) ... even for MVS systems that never had
FBA support
https://web.archive.org/web/20230719145910/https://www.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3090.html
trivia: Early in rex days (before it was renamed rexx and released to
customers), I wanted to show it wasn't just another pretty scripting
language. I decided to spend half time over three months rewriting a
large assembler application (dump reader and diagnostics) in rex, with
ten times the function and running ten times faster (some sleight of
hand to make interpreted rex run faster than assembler). I finished
early, so wrote a library of automated routines that searched for
common failure signatures. For some reason it was never released to
customers (even though it was in use by nearly every internal
datacenter and PSR) ... I did eventually get permission to give user
group presentations on how I did the implementation ... and eventually
similar implementations started to appear. Then the 3092 group asked
if they could ship it with the 3090 service processor; some old
archive email
https://www.garlic.com/~lynn/2010e.html#email861031
https://www.garlic.com/~lynn/2010e.html#email861223
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
dumprx posts
https://www.garlic.com/~lynn/submain.html#dumprx
recent posts mentioning CP40
https://www.garlic.com/~lynn/2024e.html#14 50 years ago, CP/M started the microcomputer revolution
https://www.garlic.com/~lynn/2024d.html#111 GNOME bans Manjaro Core Team Member for uttering "Lunduke"
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#102 Chipsandcheese article on the CDC6600
https://www.garlic.com/~lynn/2024d.html#25 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024d.html#0 time-sharing history, Privilege Levels Below User
https://www.garlic.com/~lynn/2024c.html#88 Virtual Machines
https://www.garlic.com/~lynn/2024c.html#65 More CPS
https://www.garlic.com/~lynn/2024c.html#18 CP40/CMS
https://www.garlic.com/~lynn/2024b.html#5 Vintage REXX
https://www.garlic.com/~lynn/2024.html#28 IBM Disks and Drums
https://www.garlic.com/~lynn/2023g.html#82 Cloud and Megadatacenter
https://www.garlic.com/~lynn/2023g.html#35 Vintage TSS/360
https://www.garlic.com/~lynn/2023g.html#1 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#108 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023d.html#87 545tech sq, 3rd, 4th, & 5th flrs
https://www.garlic.com/~lynn/2023d.html#73 Some Virtual Machine History
https://www.garlic.com/~lynn/2023c.html#105 IBM 360/40 and CP/40
https://www.garlic.com/~lynn/2022g.html#2 VM/370
https://www.garlic.com/~lynn/2022f.html#113 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022d.html#58 IBM 360/50 Simulation From Its Microcode
https://www.garlic.com/~lynn/2022.html#71 165/168/3033 & 370 virtual memory
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Disk Capacity and Channel Performance Date: 07 Aug, 2024 Blog: Facebook
The original 3380 had 20 track spacings between each data track. That was then cut in half, giving twice the tracks (& cylinders) for double the capacity; then the spacing was cut again for triple the capacity.
About then the father of 801/risc got me to try and help him with a disk "wide-head" ... transferring data in parallel over 16 closely spaced data tracks, following servo tracks on each side (18 tracks total). One of the problems was that mainframe channels were still 3mbytes/sec and this required 50mbytes/sec. Then in 1988, the branch office asked me to help LLNL (national lab) get some serial stuff they were working with standardized, which quickly becomes the fibre-channel standard ("FCS", including some stuff I had done in 1980 ... initially 1gbit/sec full-duplex, 200mbytes/sec aggregate). Later POK announces some serial stuff they had been working on since at least 1980, with ES/9000, as ESCON (when it was already obsolete, around 17mbytes/sec).
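The 50mbytes/sec requirement follows directly from reading 16 tracks in parallel; the ~3mbytes/sec per-track rate below is an assumption matching the channel rate of the era, not a documented drive spec:

```python
# Why the wide-head needed ~50mbytes/sec while channels did 3mbytes/sec:
tracks_in_parallel = 16        # data tracks under the wide head
per_track_mb_sec = 3           # assumed single-track data rate of the era
wide_head_mb_sec = tracks_in_parallel * per_track_mb_sec   # 48, i.e. ~50

channel_mb_sec = 3
assert wide_head_mb_sec / channel_mb_sec == 16   # 16x the channel rate
```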
Then some POK engineers become involved with FCS and define a protocol that radically reduces the throughput, eventually announced as FICON. The latest public benchmark I've found is z196 "Peak I/O" getting 2M IOPS over 104 FICON channels. About the same time an FCS was announced for E5-2600 server blades claiming over a million IOPS (two such FCS having higher throughput than 104 FICON). Note that IBM documentation also recommended that SAPs (system assist processors that handle the actual I/O) be kept to 70% CPU ... or around 1.5M IOPS. Somewhat complicating matters: CKD DASD haven't been made for decades, all being simulated on industry standard fixed-block disks.
re:
https://www.ibm.com/support/pages/system/files/inline-files/IBM%20z16%20FEx32S%20Performance_3.pdf
aka zHPF & TCW are closer to native FCS operation from 1988 (and what I had done in 1980) ... trivia: the hardware vendor tried to get IBM to release my support in 1980 ... but the group in POK playing with fiber were afraid it would make it harder to get their stuff released (eventually a decade later as ESCON) ... and got it vetoed. Old archived (bit.listserv.ibm-main) post from 2012 discussing how zHPF&TCW are closer to the original 1988 FCS specification (and what I had done in 1980). Also mentions throughput being throttled by SAP processing https://www.garlic.com/~lynn/2012m.html#4
While working with FCS I was also doing the IBM HA/CMP product ... Nick Donofrio had approved the HA/6000 project, originally for NYTimes to move their newspaper system (ATEX) off VAXCluster to RS/6000. I rename it HA/CMP when I start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Oracle, Sybase, Informix, Ingres) that had VAXcluster support in the same source base with UNIX (I do a distributed lock manager with VAXCluster API semantics to ease the transition).
We were also using Hursley's 9333 in some configurations and I was
hoping to make it interoperable with FCS. Early jan1992, in a meeting
with the Oracle CEO, AWD/Hester tells Ellison that we would have
16-system clusters by mid92 and 128-system clusters by
ye92. Then late jan1992, cluster scale-up was transferred for announce
as IBM Supercomputer (technical/scientific *ONLY*) and we were told we
couldn't work with anything having more than four processors. We leave
IBM a few months later. Then later find that 9333 evolves into SSA
instead:
https://en.wikipedia.org/wiki/Serial_Storage_Architecture
channel extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FCS & FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
posts specifically mentioning 9333, SSA, FCS, FICON
https://www.garlic.com/~lynn/2024c.html#60 IBM "Winchester" Disk
https://www.garlic.com/~lynn/2024.html#71 IBM AIX
https://www.garlic.com/~lynn/2024.html#35 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#70 Vintage RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#58 Vintage IBM 5100
https://www.garlic.com/~lynn/2023e.html#78 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#97 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2022e.html#47 Best dumb terminal for serial connections
https://www.garlic.com/~lynn/2022b.html#15 Channel I/O
https://www.garlic.com/~lynn/2021k.html#127 SSA
https://www.garlic.com/~lynn/2021g.html#1 IBM ESCON Experience
https://www.garlic.com/~lynn/2019b.html#60 S/360
https://www.garlic.com/~lynn/2019b.html#57 HA/CMP, HA/6000, Harrier/9333, STK Iceberg & Adstar Seastar
https://www.garlic.com/~lynn/2016h.html#95 Retrieving data from old hard drives?
https://www.garlic.com/~lynn/2013m.html#99 SHARE Blog: News Flash: The Mainframe (Still) Isn't Dead
https://www.garlic.com/~lynn/2013m.html#96 SHARE Blog: News Flash: The Mainframe (Still) Isn't Dead
https://www.garlic.com/~lynn/2013i.html#50 The Subroutine Call
https://www.garlic.com/~lynn/2013g.html#14 Tech Time Warp of the Week: The 50-Pound Portable PC, 1977
https://www.garlic.com/~lynn/2012m.html#2 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012k.html#77 ESCON
https://www.garlic.com/~lynn/2012k.html#69 ESCON
https://www.garlic.com/~lynn/2012j.html#13 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2011p.html#40 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2011p.html#39 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2011e.html#31 "Social Security Confronts IT Obsolescence"
https://www.garlic.com/~lynn/2010i.html#61 IBM to announce new MF's this year
https://www.garlic.com/~lynn/2010h.html#63 25 reasons why hardware is still hot at IBM
https://www.garlic.com/~lynn/2010f.html#13 What was the historical price of a P/390?
https://www.garlic.com/~lynn/2010f.html#7 What was the historical price of a P/390?
https://www.garlic.com/~lynn/2009q.html#32 Mainframe running 1,500 Linux servers?
https://www.garlic.com/~lynn/2006q.html#24 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2006p.html#46 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2005m.html#55 54 Processors?
https://www.garlic.com/~lynn/2005m.html#46 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005.html#50 something like a CTC on a PC
https://www.garlic.com/~lynn/2004d.html#68 bits, bytes, half-duplex, dual-simplex, etc
https://www.garlic.com/~lynn/2003o.html#54 An entirely new proprietary hardware strategy
https://www.garlic.com/~lynn/2003h.html#0 Escon vs Ficon Cost
https://www.garlic.com/~lynn/2002j.html#15 Unisys A11 worth keeping?
https://www.garlic.com/~lynn/2002h.html#78 Q: Is there any interest for vintage Byte Magazines from 1983
https://www.garlic.com/~lynn/95.html#13 SSA
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: After private equity takes over hospitals, they are less able to care for patients Date: 08 Aug, 2024 Blog: Facebook
After private equity takes over hospitals, they are less able to care for patients, top medical researchers say. A study by physicians in the Journal of the American Medical Association describes a pattern of selling land, equipment and other resources after private equity acquires hospitals.
private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
KKR Founders Sued for Allegedly Getting Giant Payday for No
Work. Lawsuit adds to legal scrutiny of arcane tax deals benefiting
private-equity heavyweights
https://archive.ph/kqQvI
tax fraud, tax evasion, tax loopholes, tax abuse, tax avoidance, tax
haven posts
https://www.garlic.com/~lynn/submisc.html#tax.evasion
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Public Facebook Mainframe Group Date: 08 Aug, 2024 Blog: Facebook
New to the group
I had taken a 2credit-hr intro to fortran/computers and at the end of the semester was hired to rewrite 1401 MPIO for 360/30 (univ. was getting a 360/67 for tss/360, replacing 709/1401; it temporarily got a 360/30 replacing the 1401 pending the 360/67). Within a year of taking the intro class the 360/67 arrives and I was hired fulltime responsible for os/360 (tss/360 never came to production, so it ran as a 360/65). Before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidating all dataprocessing into an independent business unit). I thought the Renton datacenter the largest in the world: a couple hundred million in 360s, 360/65s arriving faster than they could be installed, boxes constantly staged in hallways around the machine room. Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing Field for payroll (although they enlarge the machine room and install a 360/67 for me to play with when I'm not doing other stuff). When I graduate, I join the IBM science center (instead of staying with the Boeing CFO). One of my hobbies after joining IBM was enhanced production operating systems for internal datacenters, and the online branch office sales&marketing support HONE systems were long-time customers.
A little over a decade ago, an IBM customer asked me to track down the IBM decision to add virtual memory to all 370s and I found a staff member to the executive making the decision. Basically MVT storage management was so bad that region sizes had to be specified four times larger than used. As a result a typical 1mbyte 370/165 only ran four concurrent regions, insufficient to keep the system busy and justified. Going to a 16mbyte virtual address space (aka VS2/SVS) allowed the number of regions to be increased by a factor of four (capped at 15 by the 4bit storage protect key) with little or no paging ... sort of like running MVT in a CP67 16mbyte virtual machine. Ludlow was doing the initial MVT->SVS implementation on a 360/67: a simple build of 16mbyte tables and simple paging code (little need for optimization with little or no paging). The biggest effort was that EXCP/SVC0 needed to make a copy of channel programs, substituting real addresses for virtual ... and he borrows the CP67 CCWTRANS code for the implementation.
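A back-of-envelope version of the region arithmetic (figures as described above):

```python
# MVT region sizes had to be specified 4x larger than actually used,
# so a 1mbyte 370/165 ran only ~4 concurrent regions. A 16mbyte
# virtual address space allows 4x the regions, but the 4bit storage
# protect key caps concurrent regions at 15 (key 0 being the system's).
regions_mvt = 4                          # what a 1mbyte 165 actually ran
regions_svs = regions_mvt * 4            # 16, from 4x the addressing room

storage_key_bits = 4
key_cap = 2**storage_key_bits - 1        # 15 usable non-zero keys
assert min(regions_svs, key_cap) == 15   # "capped at 15"
```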
I had done a dynamic adaptive resource manager (the "wheeler" scheduler) for CP67 at the univ, as an undergraduate in the 60s. I started pontificating that 360s had made a trade-off between abundant I/O resources and limited real storage and processor ... but by the mid-70s the trade-off had started to invert. In the early 80s, I wrote that between 360 announce and then, the relative system throughput of DASD had declined by an order of magnitude (disks got 3-5 times faster while systems got 40-50 times faster). Some disk division executive took exception and directed the division performance group to refute my claims. After a few weeks they came back and essentially said that I had slightly understated the problem. They then respun the analysis for a (user group) SHARE presentation on configuring DASD for improved throughput (16Aug1984, SHARE 63, B874). More recently there has been the observation that current memory access latency, measured in processor cycles, is similar to 60s DASD access latency measured in 60s processor cycles (memory is the new DASD).
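The "order of magnitude" claim is just the ratio of the two speedups:

```python
# Disks got 3-5x faster while systems got 40-50x faster, so disk
# throughput RELATIVE to the system declined roughly tenfold.
disk_speedup = 4        # midpoint of the 3-5x range
system_speedup = 45     # midpoint of the 40-50x range

relative_decline = system_speedup / disk_speedup   # ~11x
assert 9 <= relative_decline <= 12   # about an order of magnitude
```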
In the 70s, as systems got bigger and the mismatch between DASD throughput and system throughput increased, more concurrent process execution was required and it was necessary to transition to a separate virtual address space for each "region" ... aka "MVS" (to get around the 4bit storage key capping the number at 15). This created a different problem: OS360 was heavily pointer-passing oriented ... so they mapped an 8mbyte image of the MVS kernel into every (16mbyte) virtual address space, leaving 8mbytes. Then, because subsystems were moved into separate address spaces, a 1mbyte common segment area ("CSA") was mapped into every virtual address space for passing stuff back&forth with subsystems (leaving 7mbytes). However, CSA requirements were somewhat proportional to the number of concurrently running address spaces and subsystems, and "CSA" quickly becomes "common system area" (by 3033 it was running 5-6mbytes and threatening to become 8mbytes, leaving zero for applications). Much of 370/XA was specifically for addressing MVS shortcomings (the head of POK had already convinced corporate to kill the VM370 project, shutdown the development group and transfer all the people to POK to work on MVS/XA).
trivia: Boeing Huntsville had gotten a two-processor 360/67 system with several 2250s for TSS/360 ... but configured it as two 360/65s running MVT for 2250 CAD/CAM applications. They had already run into the MVT storage management problems and had modified MVT R13 to run in virtual memory mode (no paging, but able to leverage virtual memory as a partial countermeasure to MVT storage management; a precursor to the decision to add virtual memory to all 370s).
I had also been blamed for online computer conferencing (late 70s/early 80s) on the IBM internal network (larger than ARPANET/Internet from just about the beginning until sometime mid/late 80s), which really took off the spring of 1981 when I distributed a trip report of a visit to Jim Gray at Tandem (he had left IBM SJR the fall before, palming off some number of things on me). Only about 300 directly participated but claims were that 25,000 were reading; folklore is that when the corporate executive committee was told, 5of6 wanted to fire me (possibly mitigating: lots of internal datacenters were running my enhanced production operating systems). One of the outcomes was a researcher hired to study how I communicated; they sat in the back of my office for nine months taking notes on face-to-face and telephone conversations, got copies of all incoming/outgoing email and logs of all instant messages. The results were IBM reports, conference papers, books, and a Stanford PHD (joint with language and computer AI).
About the same time, I was introduced to John Boyd and would sponsor his briefings at IBM. One of his stories was about being very vocal that the electronics across the trail wouldn't work; (possibly as punishment) he was then put in command of "spook base". One of the Boyd biographies claims that "spook base" was a $2.5B "windfall" for IBM (ten times Boeing Renton); it would have helped cover the cost of the disastrous/failed early 70s IBM FS project.
89/90, the Commandant of the Marine Corps leverages Boyd for a Corps make-over ... at a time when IBM was in desperate need of a make-over. There have continued to be Military Strategy "Boyd Conferences" at Quantico MCU since Boyd passed in 1997 (although the former Commandant and I were frequently the only ones present that personally knew Boyd).
SHARE Original/Founding Knights of VM
http://mvmua.org/knights.html
IBM Mainframe Hall of Fame
https://www.enterprisesystemsmedia.com/mainframehalloffame
IBM System Mag article (some history details slightly garbled)
https://web.archive.org/web/20200103152517/http://archive.ibmsystemsmag.com/mainframe/stoprun/stop-run/making-history/
from when IBM Jargon was young and "Tandem Memos" was new
https://comlay.net/ibmjarg.pdf
Tandem Memos - n. Something constructive but hard to control; a fresh
of breath air (sic). That's another Tandem Memos. A phrase to worry
middle management. It refers to the computer-based conference (widely
distributed in 1981) in which many technical personnel expressed
dissatisfaction with the tools available to them at that time, and
also constructively criticized the way products were [are]
developed. The memos are required reading for anyone with a serious
interest in quality products. If you have not seen the memos, try
reading the November 1981 Datamation summary.
... snip ...
IBM science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
Original SQL/Relational posts
https://www.garlic.com/~lynn/submain.html#systemr
Online computer conferencing ("Tandem Memo") posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
John Boyd posts and web refs
https://www.garlic.com/~lynn/subboyd.html
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: VMNETMAP Date: 08 Aug, 2024 Blog: Facebook
I had a paper copy of the 700-some node map ... and scanned it ... but lots of non-electronic stuff has gone in several downsizings over the years. Image of desk ornament for the 1000 nodes.
Archived post with some of the weekly distribution files passing 1000
nodes in 1983 (also generated list of all company locations that added
one or more nodes during 1983):
https://www.garlic.com/~lynn/2006k.html#8
part of 1977 network map scan
a coworker at the science center was responsible for the cambridge
wide-area network that morphs into the corporate network (larger than
arpanet/internet from just about the beginning until sometime mid/late
80s); the technology was also used for the corporate sponsored univ
BITNET (also for a time larger than arpanet/internet)
https://en.wikipedia.org/wiki/BITNET
Three people had invented GML at the science center in 1969 and GML
tag processing was added to CMS SCRIPT. One of the GML inventors was
originally hired to promote the science center wide-area network:
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
Nearly all was cp67 vnet/rscs ... and then vm370 vnet/rscs. NJI/NJE simulation drivers were done for vnet/rscs so it could connect HASP (& then MVS/JES) systems, but it had to be very careful. The code had originated in HASP ("TUCC" in cols 68-71), used the unused slots in the 255-entry pseudo device table (usually around 160-180), and would trash traffic where destination or origin nodes weren't in the local table ... and the network had quickly passed 250 nodes. JES did get a fix to handle 999 nodes ... but only after the corporate network was well past 1000 nodes.
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
posts mentioning HASP/ASP, JES2/JES, and/or NJE/NJI:
https://www.garlic.com/~lynn/submain.html#hasp
BITNET (& EARN) posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
Edson (responsible for rscs/vnet)
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.
... snip ...
SJMerc article about Edson (he passed aug2020) and "IBM'S MISSED
OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at
wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references
from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm
Other problems with JES: headers intermixed job control and network fields ... and exchanging traffic between MVS/JES systems at different release levels could crash MVS. Eventually RSCS/VNET NJE/NJI drivers were updated to recognize the format required by directly connected JES systems and, if traffic originated from a JES system at a different release level, reformat the fields to keep MVS from crashing. As a result, MVS/JES systems tended to be kept hidden behind a VM370 RSCS/VNET system. There is an infamous case where MVS systems in Hursley were crashing because San Jose had changed JES and the Hursley VM370 RSCS/VNET group hadn't gotten the corresponding fixes.
Another problem was VTAM/JES link time-out. STL (since renamed SVL) was setting up a double-hop satellite link (up/down west coast to east coast and up/down east coast to England) with Hursley (to use each other's systems off-shift). They hooked it up and everything worked fine. Then a (MVS/JES biased) executive directed it to be between two JES systems and nothing worked. They then switched back to RSCS/VNET and it worked fine. The executive claimed that RSCS/VNET was too dumb to know it wasn't working. It turns out VTAM/JES had a hard-coded round-trip limit and the double-hop satellite link round-trip time exceeded the limit.
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: VMNETMAP Date: 08 Aug, 2024 Blog: Facebook
re:
3278 drift: 3277/3272 had a hardware response of .086sec ... it was followed by the 3278, which moved a lot of electronics back into the 3274 controller (reducing 3278 manufacturing cost), drastically increasing protocol chatter and latency, raising hardware response to .3-.5sec (depending on amount of data). At the time there were studies showing quarter-second response improved productivity. Some number of internal VM datacenters were claiming quarter-second system response ... but you needed .164sec (or better) system response with a 3277 terminal to get quarter-sec response for the person (I was shipping enhanced production operating systems getting .11sec system response). A complaint written to the 3278 Product Administrator got back a response that the 3278 wasn't for interactive computing but for "data entry" (aka electronic keypunch). The MVS/TSO crowd never even noticed; it was a really rare TSO operation that even saw 1sec system response. Later the IBM/PC 3277 hardware emulation card would get 4-5 times the upload/download throughput of the 3278 card.
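The .164sec figure is just the quarter-second target minus the 3277 hardware response:

```python
# Perceived response = system response + terminal hardware response.
target = 0.25          # quarter-second goal from the productivity studies
hw_3277 = 0.086        # 3277/3272 hardware response
hw_3278_best = 0.3     # 3278/3274 best case

max_sys_resp_3277 = target - hw_3277
assert round(max_sys_resp_3277, 3) == 0.164   # the .164sec in the text

# with a 3278, even instantaneous system response misses the target
assert hw_3278_best > target
```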
some posts mentioning 3277/3272 & 3278/3274 timings
https://www.garlic.com/~lynn/2024b.html#31 HONE, Performance Predictor, and Configurators
https://www.garlic.com/~lynn/2019e.html#28 XT/370
https://www.garlic.com/~lynn/2019c.html#4 3270 48th Birthday
https://www.garlic.com/~lynn/2017d.html#25 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2012d.html#19 Writing article on telework/telecommuting
I took a two-credit-hr intro to fortran/computers and at the end of the semester was hired to rewrite 1401 MPIO for 360/30 ... the univ shut down the datacenter on weekends and I had it dedicated, although 48hrs w/o sleep made monday classes difficult. Within a year of taking the intro class the 709 was replaced with a 360/67 (originally intended for tss/360 but ran as 360/65) and I was hired fulltime responsible for OS/360.
Along the way, the univ. library got an ONR grant to do an online catalog (some of the money went for a 2321 datacell). The project was also selected as betatest for the original CICS program product ... and CICS support was added to my tasks.
posts mentioning CICS &/or BDAM
https://www.garlic.com/~lynn/submain.html#cics
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: VMNETMAP Date: 08 Aug, 2024 Blog: Facebook
re:
parasite/story drift: small, compact cms apps ... 3270 terminal
emulation and a hllapi-like facility (predating ibm/pc) ... could
login to the local machine running a script and/or dial to PVM (aka
passthrough) and login to a remote machine and run commands. overview
and examples
https://www.garlic.com/~lynn/2001k.html#35
story to retrieve RETAIN info
https://www.garlic.com/~lynn/2001k.html#36
The author had also done the VMSG cms email app; a very early source version was picked up by the PROFS group for their email client. When he tried to offer them a much enhanced version, they tried to get him fired. He then demonstrated that every PROFS email carried his initials in a non-displayed field. After that everything quieted down, and he would only share his source with me and one other person.
other posts mentioning Parasite/Story and VMSG
https://www.garlic.com/~lynn/2023g.html#49 REXX (DUMRX, 3092, VMSG, Parasite/Story)
https://www.garlic.com/~lynn/2023f.html#46 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023c.html#43 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#97 Online Computer Conferencing
https://www.garlic.com/~lynn/2023.html#62 IBM (FE) Retain
https://www.garlic.com/~lynn/2022b.html#2 Dataprocessing Career
https://www.garlic.com/~lynn/2021h.html#33 IBM/PC 12Aug1981
https://www.garlic.com/~lynn/2019d.html#108 IBM HONE
https://www.garlic.com/~lynn/2019c.html#70 2301, 2303, 2305-1, 2305-2, paging, etc
https://www.garlic.com/~lynn/2018f.html#54 PROFS, email, 3270
https://www.garlic.com/~lynn/2018.html#20 IBM Profs
https://www.garlic.com/~lynn/2017k.html#27 little old mainframes, Re: Was it ever worth it?
https://www.garlic.com/~lynn/2017g.html#67 What is the most epic computer glitch you have ever seen?
https://www.garlic.com/~lynn/2017.html#98 360 & Series/1
https://www.garlic.com/~lynn/2015d.html#12 HONE Shutdown
https://www.garlic.com/~lynn/2014k.html#39 1950: Northrop's Digital Differential Analyzer
https://www.garlic.com/~lynn/2014j.html#25 another question about TSO edit command
https://www.garlic.com/~lynn/2014h.html#71 The Tragedy of Rapid Evolution?
https://www.garlic.com/~lynn/2014e.html#49 Before the Internet: The golden age of online service
https://www.garlic.com/~lynn/2014.html#1 Application development paradigms [was: RE: Learning Rexx]
https://www.garlic.com/~lynn/2013d.html#66 Arthur C. Clarke Predicts the Internet, 1974
https://www.garlic.com/~lynn/2012d.html#17 Inventor of e-mail honored by Smithsonian
https://www.garlic.com/~lynn/2011o.html#30 Any candidates for best acronyms?
https://www.garlic.com/~lynn/2011m.html#44 CMS load module format
https://www.garlic.com/~lynn/2011f.html#11 History of APL -- Software Preservation Group
https://www.garlic.com/~lynn/2011b.html#83 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2011b.html#67 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2009q.html#66 spool file data
https://www.garlic.com/~lynn/2009q.html#4 Arpanet
https://www.garlic.com/~lynn/2009k.html#0 Timeline: The evolution of online communities
https://www.garlic.com/~lynn/2006n.html#23 sorting was: The System/360 Model 20 Wasn't As Bad As All That
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: VMNETMAP Date: 08 Aug, 2024 Blog: Facebook
re:
In early 80s, I got HSDT project (T1 and faster computer links, both
terrestrial and satellite) and one of my first satellite T1 links was
between the Los Gatos lab on the west coast and Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in IBM Kingston (on east coast) that had a bunch of Floating
Point Systems boxes (latest ones had 40mbyte/sec disk arrays).
https://en.wikipedia.org/wiki/Floating_Point_Systems
Was supporting both RSCS/VNET and TCP/IP and also having lots of
interference with the communication group, whose VTAM boxes were capped
at 56kbits/sec. Was also working with the NSF director and was supposed
to get $20M to interconnect the NSF supercomputer centers; then congress
cuts the budget, some other things happened and eventually an RFP was
released (in part based on what we already had running). From 28Mar1986
Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed, folklore is that 5of6 members of corporate executive committee wanted to fire me). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid), as regional networks connect in, it becomes the NSFNET backbone, precursor to modern internet.
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
Communication group had been fabricating all sorts of justifications why they weren't supporting links faster than 56kbits/sec. They eventually came out with the 3737 to drive a (short-haul, terrestrial) T1 link ... with a boatload of memory and Motorola 68k processors, with simulation pretending to be a CTCA VTAM to the local mainframe VTAM. Problem was the VTAM "window pacing algorithm" had a limit on outstanding packets and even a short-haul T1 would absorb the full packet limit before any replies began arriving (returning ACKs would eventually drop outstanding packets below the limit, allowing additional packets to transmit, but the result was only a very small amount of T1 bandwidth being used). The local 3737 simulated CTCA VTAM would immediately ACK packets, trying to keep host VTAM packets flowing ... and then use a different protocol to actually transmit packets to the remote 3737 (however it was only good for about 2mbits, compared to US T1 full-duplex 3mbits and EU T1 full-duplex 4mbits).
By comparison, HSDT ran dynamic adaptive rate-based pacing (rather than window pacing) ... adjusting how fast (the interval between) packets were sent to the other end. If no packets were being dropped and the rate didn't have to be adjusted, then it would transmit as fast as the link could transmit.
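The difference between the two pacing schemes comes down to the bandwidth-delay product. A minimal sketch (the window, packet size, and round-trip times below are assumed purely for illustration; the actual VTAM limits aren't given in the text):

```python
def window_paced_utilization(window_pkts, pkt_bytes, link_bps, rtt_s):
    """Fraction of link bandwidth a window-paced sender can use when at
    most window_pkts unACKed packets may be outstanding: the window must
    cover the bandwidth-delay product to keep the pipe full."""
    bdp_bytes = link_bps / 8 * rtt_s        # bytes "in flight" needed to fill link
    in_flight = window_pkts * pkt_bytes     # the most the window allows in flight
    return min(1.0, in_flight / bdp_bytes)

# Assumed illustrative numbers: 7-packet window, 256-byte packets, T1 link.
print(window_paced_utilization(7, 256, 1_544_000, 0.020))  # short-haul terrestrial
print(window_paced_utilization(7, 256, 1_544_000, 0.560))  # satellite round trip
```

Rate-based pacing sidesteps the window entirely: the sender spaces packets at the link's service rate, stretching the interval only when drops indicate congestion, so utilization doesn't collapse as round-trip time grows.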
Corporate also required that all links (leaving IBM premises) had to be encrypted and I really hated what I had to pay for T1 link encryptors; faster encryptors were really hard to find. I had done some playing with software DES and found it ran about 150kbytes/sec on a 3081 (aka both 3081 processors would be required to handle software encryption for a T1 full-duplex link). I then got involved in doing a link encryptor that could handle 3mbytes/sec (not mbits) and cost less than $100 to build. Initially the corporate DES group said it seriously weakened the DES implementation. It took me 3 months to figure out how to explain what was happening, but it was a hollow victory ... they said that there was only one organization that was allowed to use such crypto ... we could make as many boxes as we wanted but they would all have to be sent to that organization. It was when I realized there were three kinds of crypto in the world: 1) the kind they don't care about, 2) the kind you can't do, 3) the kind you can only do for them.
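The "both 3081 processors" remark checks out as back-of-envelope arithmetic (a sketch; the 150kbytes/sec figure is from the text, the aggregate rate from the US T1 full-duplex 3mbits mentioned above):

```python
# Software DES at ~150 kbytes/sec per 3081 processor, vs a US T1
# full-duplex link (~3 Mbit/s aggregate across both directions).
des_bytes_per_sec = 150_000
t1_aggregate_bytes_per_sec = 3_000_000 / 8   # 375,000 bytes/sec
processors_needed = t1_aggregate_bytes_per_sec / des_bytes_per_sec
print(processors_needed)   # 2.5 -- more than both 3081 processors combined
```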
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: VMNETMAP Date: 09 Aug, 2024 Blog: Facebook
re:
In the 80s, the communication group was getting the native RSCS/VNET drivers restricted, shipping just the simulated NJE/NJI drivers supporting SNA (and univ BITNET was converting to TCP/IP instead) ... for a while the internal corporate VNET continued to use the native RSCS/VNET drivers because they had higher throughput.
Then communication group fabricated a story for the executive
committee that the internal RSCS/VNET had to all convert to SNA (or
otherwise PROFS would stop working). I had done a lot of work to get
RSCS/VNET drivers working at T1 speed and was scheduled to give
presentation at next corporate CJN backbone meeting. Then got email
that the communication group had got CJN meetings restricted to
managers only ... didn't want a lot of technical people confusing
decision makers with facts (as part of getting it converted to
SNA). Some old email in archived posts
https://www.garlic.com/~lynn/2006x.html#email870302
https://www.garlic.com/~lynn/2011.html#email870306
the communication group was also spreading internal misinformation
that SNA/VTAM could be used for NSFNET and somebody was collecting a
lot of that email and then forwarded it to us ... heavily clipped and
redacted (to protect the guilty)
https://www.garlic.com/~lynn/2006w.html#email870109
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Disk Capacity and Channel Performance Date: 09 Aug, 2024 Blog: Facebook
re:
1980, STL (since renamed SVL) was bursting at the seams and they were moving 300 people from the IMS group (and their 3270s) to an offsite bldg (about halfway between STL and the main plant site), with dataprocessing service back to the STL datacenter. They tried "remote" 3270 but found the human factors totally unacceptable. I then get con'ed into doing channel extender support, locating channel-attached 3270 controllers at the offsite bldg, resulting in no perceptible difference between off-site and inside STL.
Then the hardware vendor tries to get IBM to release my support ... but there were some POK engineers playing with some serial stuff who get it vetoed (afraid that if it was in the market, it would make it harder to get their stuff released, which they eventually do a decade later as ESCON, when it is already obsolete).
It turns out that STL had been spreading 3270 controllers across the 168&3033 channels with the 3830 disk controllers. The channel-extender support significantly reduced channel busy (getting the 3270 controllers directly off IBM channels) for the same amount of 3270 traffic, resulting in 10-15% improvement in system throughput. STL then was considering putting all 3270 channel-attached controllers behind channel-extender support.
channel-extender support
https://www.garlic.com/~lynn/submisc.html#channel.extender
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: VMNETMAP Date: 09 Aug, 2024 Blog: Facebook
re:
I had first done channel-extender support in 1980 for STL moving 300 people to an off-site bldg (which made use of a Collins T3 digital radio). Then IBM in Boulder was moving a lot of people to a bldg across a heavy-traffic highway. They wanted to use infrared modems on the roofs of the two bldgs (eliminating lots of gov. permissions) ... however there were snide remarks that Boulder weather would adversely affect the signal. I had Fireberd bit-error testers on a 56kbit subchannel and did see some bit drops during a white-out snow storm when nobody was able to get into work (I had written a Turbo Pascal program for PC/ATs that supported up to four ASCII inputs from bit-error testers for keeping machine-readable logs).
The big problem was sunshine ... heating of the bldgs on one side during the day (resulting in an imperceptible lean) slightly changed the focus of the infrared beam between the two (roof-mounted) modems. It took some amount of trial and error to compensate for bldg sunshine heating.
Then my HSDT project got a custom-built 3-node Ku-band satellite system (4.5m dishes in Los Gatos and Yorktown and a 7m dish in Austin) with a transponder on SBS4. A Yorktown community meeting complained about radiation from the 25watt signal. It was pointed out that if they were hung directly above the dish transmission, it would be significantly less radiation than they were currently getting from the local FM radio tower transmission.
channel extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
hsdt project
https://www.garlic.com/~lynn/subnetwork.html#hsdt
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: The Irrationality of Markets Date: 09 Aug, 2024 Blog: Facebook
The Irrationality of Markets
Commodities markets used to have a rule that only entities with
significant holdings could play ... because speculators resulted in
wild, irrational volatility (they live on volatility, bet on the
direction of market change and then manipulate to see that it happens,
bet/pump&dump going up and then bet on going down). GRIFTOPIA
https://www.amazon.com/Griftopia-Machines-Vampire-Breaking-America-ebook/dp/B003F3FJS2/
has a chapter on commodity market secret letters allowing specific
speculators to play, resulting in the huge oil spike summer of
2008. Fall of 2008, a member of congress released details of speculator
transactions responsible for the huge oil price spike/drop ... and the
press, instead of commending the congressman, pilloried him for
violating the privacy of those responsible.
old interview mentions the illegal activity goes on all the time in
equity markets (even before HFT)
https://nypost.com/2007/03/20/cramer-reveals-a-bit-too-much/
SEC Caught Dark Pool and High Speed Traders Doing Bad Stuff
https://web.archive.org/web/20140608215213/http://www.bloombergview.com/articles/2014-06-06/sec-caught-dark-pool-and-high-speed-traders-doing-bad-stuff
Fast money: the battle against the high frequency traders; A 'flash
crash' can knock a trillion dollars off the stock market in minutes as
elite traders fleece the little guys. So why aren't the regulators
stepping in? We talk to the legendary lawyer preparing for an epic
showdown
http://www.theguardian.com/business/2014/jun/07/inside-murky-world-high-frequency-trading
The SEC's Mary Jo White Punts on High Frequency Trading and Abandons
Securities Act of 1934
http://www.nakedcapitalism.com/2014/06/secs-mary-jo-white-punts-high-frequency-trading-abandons-securities-act-1934.html
griftopia posts
https://www.garlic.com/~lynn/submisc.html#griftopia
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
posts mentioning illegal activity and high frequency trading
https://www.garlic.com/~lynn/2022b.html#96 Oil and gas lobbyists are using Ukraine to push for a drilling free-for-all in the US
https://www.garlic.com/~lynn/2021k.html#96 'Most Americans Today Believe the Stock Market Is Rigged, and They're Right'
https://www.garlic.com/~lynn/2019b.html#11 For The Average Investor, The Next Bear Market Will Likely Be The Last
https://www.garlic.com/~lynn/2018f.html#105 Is LINUX the inheritor of the Earth?
https://www.garlic.com/~lynn/2018f.html#104 Netscape: The Fire That Filled Silicon Valley's First Bubble
https://www.garlic.com/~lynn/2017c.html#22 How do BIG WEBSITES work?
https://www.garlic.com/~lynn/2015g.html#47 seveneves
https://www.garlic.com/~lynn/2015g.html#46 seveneves
https://www.garlic.com/~lynn/2015c.html#17 Robots have been running the US stock market, and the government is finally taking control
https://www.garlic.com/~lynn/2015.html#58 IBM Data Processing Center and Pi
https://www.garlic.com/~lynn/2014g.html#109 SEC Caught Dark Pool and High Speed Traders Doing Bad Stuff
https://www.garlic.com/~lynn/2014f.html#20 HFT, computer trading
https://www.garlic.com/~lynn/2014e.html#72 Three Expensive Milliseconds
https://www.garlic.com/~lynn/2014e.html#18 FBI Investigates High-Speed Trading
https://www.garlic.com/~lynn/2013d.html#54 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012c.html#13 Study links ultrafast machine trading with risk of crash
https://www.garlic.com/~lynn/2011l.html#21 HOLLOW STATES and a CRISIS OF CAPITALISM
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 138/148 Date: 09 Aug, 2024 Blog: Facebook
after Future System implodes, Endicott asks me to help with Virgil/Tully (aka 138/148), they wanted to do microcode assist (competitive advantage especially in world trade). I was told there was 6kbytes of microcode space and 6kbytes of 370 instructions would approx. translate into 6kbytes of microcode instructions running ten times faster. I was to identify the 6kbytes of highest-executed kernel code. old archive post with initial analysis
basically 6kbytes accounted for 79.55% of kernel execution (and moved to microcode would run ten times faster). Then they wanted me to run around the world presenting the business case to US & world-trade business planners and forecasters. I was told that US region forecasters got promoted for forecasting whatever corporate told them was strategic while world-trade forecasters could get fired for bad forecasts. One of the differences was bad US region forecasts had to be "eaten" by the factory while for world-trade forecasts, factories shipped to the ordering country (factories tended to redo US region forecasts to be on the safe side). In any case the US region 138/148 forecasts were that it didn't make any difference what the features were, they would sell some percent more than 135/145. On the other hand, the world-trade forecasters said without distinct/unique features they wouldn't sell any 138/148s because of competition with the clone 370 makers.
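The overall effect of microcoding that 6kbytes follows Amdahl's law; a small sketch using the figures above:

```python
def amdahls_speedup(fraction, factor):
    """Overall speedup when `fraction` of total execution is sped up
    by `factor` and the rest is unchanged (Amdahl's law)."""
    return 1.0 / ((1.0 - fraction) + fraction / factor)

# 6kbytes of kernel path = 79.55% of kernel execution, microcoded 10x faster:
print(amdahls_speedup(0.7955, 10))   # ~3.5x speedup of kernel-mode execution
```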
Then Endicott tried to convince corporate that VM370 be pre-installed on every 138/148 system (somewhat like current PR/SM & LPAR). This was in the period when the head of POK was convincing corporate to kill the VM370 product, shutdown the development group and transfer all the people to POK for MVS/XA ... and Endicott was having to fight just to preserve VM370. Endicott wasn't able to get permission to ship every 138/148 with VM370 pre-installed; they managed to save the VM370 product mission (for the mid-range), but had to recreate a development group from scratch.
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
some other posts mentioning 138/148 ecps
https://www.garlic.com/~lynn/2023f.html#57 Vintage IBM 370/125
https://www.garlic.com/~lynn/2013o.html#82 One day, a computer will fit on a desk (1974) - YouTube
https://www.garlic.com/~lynn/2012d.html#70 Mainframe System 370
https://www.garlic.com/~lynn/2007s.html#36 Oracle Introduces Oracle VM As It Leaps Into Virtualization
https://www.garlic.com/~lynn/2007g.html#44 1960s: IBM mgmt mistrust of SLT for ICs?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: VMNETMAP Date: 09 Aug, 2024 Blog: Facebook
re:
One of the supposed strings attached to the HSDT funding was I was supposed to show some IBM content. CPD did have the T1 2701 in the 60s ... but then possibly got stuck at 56kbits because of VTAM shortcomings. I was able to find the S/1 Zirpel T1 card from FSD (apparently for some of the gov. customers that had failing 2701s). I go to order a few S/1s and find that there was a year's backlog; apparently the recently purchased ROLM had ordered a whole slew of S/1s, creating the backlog. I knew the director of the ROLM datacenter from their IBM days and managed to cut a deal for some S/1 order positions if I would help ROLM with some of their problems.
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
some posts that mentioning zirpel, rolm, s/1
https://www.garlic.com/~lynn/2024b.html#62 Vintage Series/1
https://www.garlic.com/~lynn/2023f.html#44 IBM Vintage Series/1
https://www.garlic.com/~lynn/2023c.html#35 30 years ago, one decision altered the course of our connected world
https://www.garlic.com/~lynn/2023.html#101 IBM ROLM
https://www.garlic.com/~lynn/2022f.html#111 IBM Downfall
https://www.garlic.com/~lynn/2021j.html#62 IBM ROLM
https://www.garlic.com/~lynn/2021j.html#12 Home Computing
https://www.garlic.com/~lynn/2021f.html#2 IBM Series/1
https://www.garlic.com/~lynn/2018b.html#9 Important US technology companies sold to foreigners
https://www.garlic.com/~lynn/2017h.html#99 Boca Series/1 & CPD
https://www.garlic.com/~lynn/2016h.html#26 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016d.html#27 Old IBM Mainframe Systems
https://www.garlic.com/~lynn/2015e.html#84 Inaugural Podcast: Dave Farber, Grandfather of the Internet
https://www.garlic.com/~lynn/2015e.html#83 Inaugural Podcast: Dave Farber, Grandfather of the Internet
https://www.garlic.com/~lynn/2014f.html#24 Before the Internet: The golden age of online services
https://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2013j.html#43 8080 BASIC
https://www.garlic.com/~lynn/2013j.html#37 8080 BASIC
https://www.garlic.com/~lynn/2013g.html#71 DEC and the Bell System?
https://www.garlic.com/~lynn/2009j.html#4 IBM's Revenge on Sun
https://www.garlic.com/~lynn/2006n.html#25 sorting was: The System/360 Model 20 Wasn't As Bad As All That
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Disk Capacity and Channel Performance Date: 10 Aug, 2024 Blog: Facebook
re:
when I transfer out to SJR, got to wander around datacenters in silicon valley, including bldgs 14&15 (disk engineering and product test) across the street. Bldg14 had multiple layers of physical security, including the machine room having each development box inside a heavy mesh locked cage ("testcells"). They were running 7x24, pre-scheduled, stand-alone testing and had mentioned that they had recently tried MVS ... but it had 15min mean-time-between-failure (in that environment) requiring manual re-ipl. I offered to rewrite the I/O supervisor making it bullet proof and never fail, allowing any amount of on-demand, concurrent testing, greatly improving productivity. The downside was they got in the habit of blaming my software for problems and I would have to spend increasing amounts of time playing disk engineer diagnosing their problems.
Then bldg15 got the 1st engineering 3033 outside the POK processor engineering flr. Since testing only took a percent or two of the processor, we scrounged up a 3830 disk controller and string of 3330 disks and setup our own private online service (and ran 3270 coax under the street to my 3277 in sjr/28). One Monday morning got a call asking what I had done over the weekend to destroy 3033 throughput. After some back&forth eventually discovered that somebody had replaced the 3830 disk controller with an engineering 3880 controller. The engineers had been complaining about how the bean counters had dictated that the 3880 have a really slow (inexpensive?) microprocessor for most operations (other than actual data transfers) ... it really slowed down operations. To partially mask how slow it really was, they were trying to present the end-of-operation interrupt early, hoping that they could finish up overlapped with software interrupt handling (it wasn't working: software was attempting to redrive with queued I/O, the controller then had to respond with control unit busy (SM+BUSY), requeue the attempted redrive, and wait for control unit end ... longer elapsed time, higher channel busy, and higher software overhead).
The early interrupt was tuned to not cause as bad a problem ... but the higher channel busy still persisted. Later, the 3090/trout had configured the number of channels for the target system throughput (assuming 3880 channel busy was the same as 3830, but with 3380 3mbyte data transfer). When they found out how bad the 3880 was, they realized they had to greatly increase the number of channels (to meet the throughput target), which also required an additional TCM (semi-facetiously they said they would bill the 3880 business for the higher 3090 manufacturing costs). Marketing eventually respins the large increase in 3090 channels as a great I/O machine (rather than a countermeasure for the increased 3880 channel busy).
I had also written an (internal only) research report on the I/O reliability work for bldgs 14&15 and happened to mention the MVS 15min MTBF ... bringing the wrath of the MVS organization down on my head. Later, just before 3880/3380 was about to ship, FE had a regression test of 57 simulated errors that were likely to occur. In all 57 cases, MVS was still crashing (requiring re-ipl) and in 2/3rds of the cases there was no indication of what caused the failure. I didn't feel bad about it.
bldg26: long 1-story ... mostly machine room, lots of MVS systems ... long gone. engineering bldg14: two-story across the street ... last time checked it was one of the few still standing ... machine room on 2nd flr ... with testcell wire cages. after the earthquake, bldgs underwent earthquake remediation ... when it came to bldg14, engineering was temporarily relocated to a non-IBM bldg south of the main plant site
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
some recent posts mentioning 3090 had to greatly increase
number of channels
https://www.garlic.com/~lynn/2024d.html#95 Mainframe Integrity
https://www.garlic.com/~lynn/2024d.html#91 Computer Virtual Memory
https://www.garlic.com/~lynn/2024c.html#73 Mainframe and Blade Servers
https://www.garlic.com/~lynn/2024c.html#19 IBM Millicode
https://www.garlic.com/~lynn/2024b.html#115 Disk & TCP/IP I/O
https://www.garlic.com/~lynn/2024b.html#48 Vintage 3033
https://www.garlic.com/~lynn/2024.html#80 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023g.html#57 Future System, 115/125, 138/148, ECPS
https://www.garlic.com/~lynn/2023g.html#26 Vintage 370/158
https://www.garlic.com/~lynn/2023f.html#62 Why Do Mainframes Still Exist
https://www.garlic.com/~lynn/2023f.html#36 Vintage IBM Mainframes & Minicomputers
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2023d.html#103 The IBM mainframe: How it runs and why it survives
https://www.garlic.com/~lynn/2023d.html#18 IBM 3880 Disk Controller
https://www.garlic.com/~lynn/2023d.html#4 Some 3090 & channel related trivia:
https://www.garlic.com/~lynn/2023c.html#45 IBM DASD
https://www.garlic.com/~lynn/2023.html#41 IBM 3081 TCM
https://www.garlic.com/~lynn/2023.html#4 Mainrame Channel Redrive
https://www.garlic.com/~lynn/2022h.html#114 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022g.html#75 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022g.html#4 3880 DASD Controller
https://www.garlic.com/~lynn/2022e.html#100 Mainframe Channel I/O
https://www.garlic.com/~lynn/2022e.html#49 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022c.html#106 TCMs & IBM Mainframe
https://www.garlic.com/~lynn/2022c.html#66 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022b.html#77 Channel I/O
https://www.garlic.com/~lynn/2022b.html#15 Channel I/O
https://www.garlic.com/~lynn/2022.html#13 Mainframe I/O
https://www.garlic.com/~lynn/2021k.html#122 Mainframe "Peak I/O" benchmark
https://www.garlic.com/~lynn/2021j.html#92 IBM 3278
https://www.garlic.com/~lynn/2021i.html#30 What is the oldest computer that could be used today for real work?
https://www.garlic.com/~lynn/2021f.html#23 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2021c.html#66 ACP/TPF 3083
https://www.garlic.com/~lynn/2021.html#60 San Jose bldg 50 and 3380 manufacturing
https://www.garlic.com/~lynn/2021.html#6 3880 & 3380
https://www.garlic.com/~lynn/2020.html#42 If Memory Had Been Cheaper
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: Implicit Versus Explicit "Run" Command Newsgroups: alt.folklore.computers Date: Sat, 10 Aug 2024 13:05:43 -1000
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
Gary was at the NPG school and working on IBM CP67/CMS ... some of the MIT 7094/CTSS people had gone to the 5th flr for Multics and others went to the 4th flr to the IBM science center and did CP40/CMS ... CMS (Cambridge Monitor System) was originally developed to run on a real 360/40 ... while they were doing the 360/40 hardware mods to support virtual memory ... and then ran in "CP40" (control program 360/40) virtual machines (CP40/CMS morphs into CP67/CMS when the 360/67, standard with virtual memory, became available, later morphing into VM370/CMS ... where "CMS" becomes Conversational Monitor System).
CMS search order: the "P" filesystem was originally a 2311 (then minidisk
when CP40 virtual machines became available) ... more details:
https://bitsavers.org/pdf/ibm/360/cp67/GY20-0591-1_CMS_PLM_Oct1971.pdf
filesystem/disks pg4 (PDF13): "P" (primary user), "S" (system), then possibly "A", "B", & "C" user files, "T" (temporary/work).
filesystem/disk search order pg34 (PDF43): P, T, A, B, S, C
Execution control pg119 (PDF130): executable filetypes: "TEXT" (output of compiler/assembler), "EXEC" (aka shell scripts), and "MODULE" (memory image) ... if just a filename is specified (& no type specified) ... it searches for the filename for each type ... in filesystem/disk order ... until a match is found.
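The resolution described above can be sketched as a pair of nested loops (a stand-in, not CMS code: the set of tuples stands in for the per-minidisk file directories, and the relative priority among the three executable filetypes is an assumption here; the PLM page has the actual details):

```python
SEARCH_ORDER = ["P", "T", "A", "B", "S", "C"]   # minidisk search order (pg34)
EXEC_TYPES = ["MODULE", "EXEC", "TEXT"]         # executable filetypes (assumed priority)

def resolve(filename, files):
    """Return the first (filename, filetype, disk) match: for each
    executable filetype, scan the minidisks in search order."""
    for ftype in EXEC_TYPES:
        for disk in SEARCH_ORDER:
            if (filename, ftype, disk) in files:
                return (filename, ftype, disk)
    return None

# e.g. with this priority, a MODULE on the A-disk wins over an EXEC on the P-disk:
files = {("SORT", "MODULE", "A"), ("SORT", "EXEC", "P")}
print(resolve("SORT", files))
```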
ibm science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
other posts referencing CMS PLM
https://www.garlic.com/~lynn/2017j.html#51 IBM 1403 Printer Characters
https://www.garlic.com/~lynn/2013h.html#85 Before the PC: IBM invents virtualisation
https://www.garlic.com/~lynn/2013h.html#75 cp67 & vm370
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Gene Amdahl Date: 10 Aug, 2024 Blog: Linkedin
Amdahl had won the battle to make ACS 360-compatible. Folklore is that executives were then afraid it would advance the state of the art too fast and IBM would lose control of the market, and it is killed; Amdahl leaves shortly later (lists some features that show up more than two decades later with ES/9000)
Not long after Amdahl leaves, IBM has the Future System effort,
completely different from 370 and going to completely replace 370;
internal politics during FS was killing off 370 projects, and the
dearth of new 370s during FS is credited with giving the clone 370
makers (including Amdahl) their market foothold
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html
one of the last nails in the FS coffin was IBM Houston Science Center
analysis that if 370/195 applications were redone for an FS machine
made out of the fastest hardware technology available, they would have
throughput of a 370/145 (about 30 times slowdown)
When I join IBM one of my hobbies was enhanced production operating systems for internal datacenters and the online branch office sales&marketing support HONE systems were a long time customer ... and all during FS, I continued to work on 360/370 and would periodically ridicule what FS was doing (which wasn't exactly career enhancing). Then the US HONE systems were consolidated in Palo Alto with single-system-image, loosely-coupled, shared DASD including load-balancing and fall-over across the complex.
In the morph of CP67->VM370, lots of stuff was simplified and/or dropped (including tightly-coupled, shared memory multiprocessing). I then add SMP/shared-memory support back to VM370, initially for US HONE so they could add a 2nd processor to each system (for 16 processors total, getting each 2-processor system twice the throughput of a single processor).
When FS finally implodes there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel. I also get roped into helping with a 16-processor, tightly-coupled, shared memory 370 multiprocessor and we con the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 370/168 logic to 20% faster chips).
At first everybody thought it was great until somebody tells the head of POK that it could be decades before POK's favorite son operating system (i.e. "MVS") had (effective) 16-way support (at the time, MVS documents had 2-processor SMP with only 1.2-1.5 times the throughput of a single processor, aka MVS multiprocessor overhead ... note POK doesn't ship a 16-processor system until after the turn of the century). Then the head of POK invites some of us to never visit POK again and directs the 3033 processor engineers, heads down, no distractions.
3081 originally was going to be multiprocessor-only ... and each 3081D processor was supposed to be faster than 3033 ... but several benchmarks were showing them slower. Then the processor cache size is doubled for the 3081K and the aggregate 2-processor MIP rate was about the same as the Amdahl single processor (and MVS 3081K throughput much less because of its SMP overhead, aka approx .6-.75 of an Amdahl single processor).
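The .6-.75 figure follows directly from the earlier MVS 2-processor numbers; a sketch of the arithmetic:

```python
# MVS 2-processor SMP: only 1.2-1.5 times one processor's throughput,
# i.e. 0.6-0.75 effective per processor. Since the two 3081K processors
# together roughly matched one Amdahl uniprocessor in raw MIPS, MVS on a
# 3081K delivered that same 0.6-0.75 fraction of the Amdahl machine.
mvs_2way_speedup = (1.2, 1.5)                    # vs a single processor
per_cpu_efficiency = tuple(s / 2 for s in mvs_2way_speedup)
print(per_cpu_efficiency)   # (0.6, 0.75)
```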
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
HONE (& APL) posts
https://www.garlic.com/~lynn/subtopic.html#hone
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Gene Amdahl Date: 11 Aug, 2024 Blog: Linkedin
re:
SHARE Original/Founding Knights of VM
http://mvmua.org/knights.html
IBM Mainframe Hall of Fame
https://www.enterprisesystemsmedia.com/mainframehalloffame
IBM System Mag article (some history details slightly garbled)
https://web.archive.org/web/20200103152517/http://archive.ibmsystemsmag.com/mainframe/stoprun/stop-run/making-history/
late 70s and early 80s, I was also blamed for online computer
conferencing on the internal network (larger than arpanet/internet
from just about the beginning until sometime mid/late 80s), folklore
was that when corporate executive committee was told, 5of6 wanted to
fire me; it had really taken off spring of 1981 when I distributed
trip report of visit to jim gray (departed IBM SJR for Tandem fall
1980), only about 300 participated but claims that 25,000 were reading,
from when IBM Jargon was young and "Tandem Memos" was new
https://comlay.net/ibmjarg.pdf
Tandem Memos - n. Something constructive but hard to control; a fresh
of breath air (sic). That's another Tandem Memos. A phrase to worry
middle management. It refers to the computer-based conference (widely
distributed in 1981) in which many technical personnel expressed
dissatisfaction with the tools available to them at that time, and
also constructively criticized the way products were [are]
developed. The memos are required reading for anyone with a serious
interest in quality products. If you have not seen the memos, try
reading the November 1981 Datamation summary.
... snip ...
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: The Forgotten History of the Financial Crisis. What the World Should Have Learned in 2008 Date: 11 Aug, 2024 Blog: Facebook
The Forgotten History of the Financial Crisis. What the World Should Have Learned in 2008
Financial houses 2001-2008 did over $27T in securitizing mortgages/loans: paying for triple-A ratings (when rating agencies knew they weren't worth triple-A, from Oct2008 congressional hearings) and selling into the bond market. YE2008, just the four largest too-big-to-fail were still carrying $5.2T in offbook toxic CDOs.
Jan1999 I was asked to help prevent the coming economic mess. I was told some investment bankers had walked away "clean" from the "S&L Crisis", were then running Internet IPO mills (invest a few million, "hype", then IPO for a couple billion; needed to fail to leave the field clear for the next round), and were predicted to next get into securitized mortgages/loans. I was to help improve the integrity of securitized loan/mortgage supporting documents (as countermeasure). Then they found they could start doing no-document, liar mortgages/loans, securitize, pay for triple-A, and sell into the bond market ("no documents", "no integrity")
Then they found they could start doing securitized mortgages/loans designed to fail and then take out CDS gambling bets. The largest holder of CDS gambling bets was AIG, which was negotiating to pay off at 50cents on the dollar. Then SECTREAS steps in and says they had to sign a document that they couldn't sue those making the bets and take TARP funds to pay off at 100cents on the dollar. The largest recipient of TARP funds was AIG and the largest recipient of face value payoffs was the firm formerly headed by SECTREAS (note with only $700B in TARP funds, it would have hardly made a dent in the toxic CDO problem ... the real too-big-to-fail bailout had to be done by FEDRES).
Later found some of the too-big-to-fail were money laundering for
terrorists and drug cartels (various stories it enabled drug cartels
to buy military grade equipment largely responsible for violence on
both sides of the border). There would be repeated "deferred
prosecutions" (promising never to do it again, each time) ...
supposedly if they repeated they would be prosecuted (but apparently
previous violations were consistently ignored). Gave rise to
too-big-to-prosecute and too-big-to-jail ... in addition to
too-big-to-fail.
https://en.wikipedia.org/wiki/Deferred_prosecution
For Big Companies, Felony Convictions Are a Mere Footnote. Boeing
guilty plea highlights how corporate convictions rarely have
consequences that threaten the business
https://archive.ph/if7H0
The accounting firm Arthur Andersen collapsed in 2002 after
prosecutors indicted the company for shredding evidence related to its
audits of failed energy conglomerate Enron. For years after Andersen's
demise, prosecutors held back from indicting major corporations,
fearing they would kill the firm in the process.
... snip ...
The Sarbanes-Oxley joke was that congress felt so badly about the
Andersen collapse that they really increased the audit requirements
for public companies (as a gift to the audit industry). The rhetoric
on the floor of congress was that SOX would prevent future ENRONs and
guarantee executives & auditors did jail time ... however it required
the SEC to do something. GAO did analysis of public company fraudulent
financial filings showing that it even increased after SOX went into
effect (and nobody doing jail time).
http://www.gao.gov/products/GAO-03-138
http://www.gao.gov/products/GAO-06-678
http://www.gao.gov/products/GAO-06-1053R
The other observation was that possibly the only part of SOX that was going to make some difference might be the informants/whistle-blowers (folklore was that one of the congressional members involved in SOX had been former FBI involved in taking down organized crime, and supposedly what made it possible were informants/whistle-blowers) ... and SEC had a 1-800 number for companies to complain about audits, but no whistle-blower hot line.
The head of the administration after the turn of the century presided over letting the fiscal responsibility act expire (spending couldn't exceed revenue, on its way to eliminating all federal debt), a huge cut in taxes (1st time taxes were cut to NOT pay for two wars), a huge increase in spending, an explosion in debt (2010 CBO report: 2003-2009 taxes were cut by $6T and spending increased $6T, for a $12T gap compared to a fiscally responsible budget), the economic mess (70 times larger than his father's 80s S&L Crisis) and the forever wars; Cheney is VP, Rumsfeld is SECDEF (again) and one of the Team B members is deputy SECDEF (and major architect of Iraq policy).
economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess
federal reserve chairman posts
https://www.garlic.com/~lynn/submisc.html#fed.chairman
(triple-A rated) toxic CDO posts
https://www.garlic.com/~lynn/submisc.html#toxic.cdo
too big to fail (too big to prosecute, too big to jail) posts
https://www.garlic.com/~lynn/submisc.html#too-big-to-fail
TARP posts
https://www.garlic.com/~lynn/submisc.html#tarp
ZIRP posts
https://www.garlic.com/~lynn/submisc.html#zirp
fiscal responsibility act posts
https://www.garlic.com/~lynn/submisc.html#fiscal.responsibility.act
tax fraud, tax evasion, tax loopholes, tax abuse, tax avoidance, tax
haven posts
https://www.garlic.com/~lynn/submisc.html#tax.evasion
Sarbanes-Oxley posts
https://www.garlic.com/~lynn/submisc.html#sarbanes.oxley
whistle-blower posts
https://www.garlic.com/~lynn/submisc.html#whistleblower
ENRON posts
https://www.garlic.com/~lynn/submisc.html#enron
financial reporting fraud posts
https://www.garlic.com/~lynn/submisc.html#financial.reporting.fraud
S&L Crisis posts
https://www.garlic.com/~lynn/submisc.html#s&l.crisis
Team B posts
https://www.garlic.com/~lynn/submisc.html#team.b
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: Instruction Tracing Newsgroups: comp.arch Date: Sun, 11 Aug 2024 16:58:03 -1000
John Levine <johnl@taugh.com> writes:
I helped with a white paper that showed nearly the whole 370 could be implemented directly in circuits (much more efficient than microcode) for 4361/4381. AS/400 returned to a CISC microprocessor. The follow-on to displaywriter was canceled (most of that market moving to IBM/PC and other personal computers).
The Austin group decided to pivot ROMP to the Unix workstation market and got the company that had done the AT&T UNIX port to IBM/PC as PC/IX to do one for ROMP (AIX, possibly "Austin IX", for the PC/RT). They also had to do something with the 200 or so PL.8 programmers and decided to use them to implement an "abstract" virtual machine as the "VRM" ... telling the company doing the UNIX port that it would be much more efficient and timely for them to implement to the VRM interface (rather than bare hardware). Besides other issues with that claim, it introduced a new problem: device drivers had to be done twice, once in "C" for the unix/AIX layer and again in "PL.8" for the VRM.
Palo Alto was working on a port of UCB BSD to 370 and got redirected to port to the PC/RT ... they demonstrated the BSD port directly to ROMP bare hardware ... with much less effort than either the VRM implementation or the AIX implementation ... released as "AOS".
trivia: early 80s, 1) IBM Los Gatos lab was working on single-chip "Blue Iliad", 1st 32bit 801/RISC, really hot, single large chip that never quite came to fruition and 2) IBM Boeblingen lab had done ROMAN, a 3-chip 370 implementation (with performance of 370/168). I had a proposal to see how many chips I could cram into a single rack (either "Blue Iliad" or ROMAN or a combination of both).
While AS/400 1st reverted to a CISC chip ... later in the 90s, out of the Somerset (AIM: apple, ibm, motorola) single-chip power/pc effort ... they got an 801/RISC chip to move to.
801/risc, iliad, romp, rios, pc/rt, power, power/pc
https://www.garlic.com/~lynn/subtopic.html#801
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Netscape Date: 12 Aug, 2024 Blog: Facebook
trivia: when NCSA complained about their use of "MOSAIC", what silicon valley company did they get "NETSCAPE" from???
I was told CISCO transferred it to them ... supposedly as part of promoting/expanding TCP/IP and the Internet.
I had been brought in as consultant responsible for webserver connections to the payment networks. Two former Oracle people (that I had worked with on HA/CMP and RDBMS cluster scaleup) were there responsible for something called "commerce server" and wanted to do payment transactions; they also wanted to use "SSL" ... the result now frequently called "electronic commerce". I did a talk on "Internet Isn't Business Critical Dataprocessing" based on the software, processes, and documentation I had to do (Postel sponsored the talk at ISI/USC).
Large mall paradigm supporting multiple stores ... originally funded by a telco that was looking for it to be a service offering ... had conventional leased line for transactions into the payment networks. Then netscape wanted to offer individual ecommerce webservers with transactions through the internet to a payment gateway into the payment networks.
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
payment gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: The Forgotten History of the Financial Crisis. What the World Should Have Learned in 2008 Date: 12 Aug, 2024 Blog: Facebook
re:
Note: CIA Director Colby wouldn't approve the "Team B"
report/analysis that had inflated the Soviet military threat, part of
justifying a large DOD budget increase. White House Chief of Staff
Rumsfeld then gets Colby replaced with Bush1 (who would approve it),
after which Rumsfeld resigns and becomes SECDEF (and Rumsfeld's
assistant Cheney becomes Chief of Staff).
https://en.wikipedia.org/wiki/Team_B
Then Bush1 becomes VP and claims he knows nothing about
https://en.wikipedia.org/wiki/Iran%E2%80%93Contra_affair
because he was full-time administration point person deregulating the
financial industry causing the S&L crisis
http://en.wikipedia.org/wiki/Savings_and_loan_crisis
along with other members of his family
http://en.wikipedia.org/wiki/Savings_and_loan_crisis#Silverado_Savings_and_Loan
and another
http://query.nytimes.com/gst/fullpage.html?res=9D0CE0D81E3BF937A25753C1A966958260
Republicans and Saudis bailing out the Bushes.
and Bush1 and Rumsfeld are also working with Saddam, supporting Iraq
in
http://en.wikipedia.org/wiki/Iran%E2%80%93Iraq_War
including supplying WMDs
http://en.wikipedia.org/wiki/United_States_support_for_Iraq_during_the_Iran%E2%80%93Iraq_war
In the early 90s, Bush1 is president and Cheney is SECDEF. A sat.
photo recon analyst told the white house that Saddam was marshaling
forces to invade Kuwait. The white house said that Saddam would do no
such thing and proceeded to discredit the analyst. Later the analyst
informed the white house that Saddam was marshaling forces to invade
Saudi Arabia; now the white house has to choose between Saddam and
the Saudis.
https://www.amazon.com/Long-Strange-Journey-Intelligence-ebook/dp/B004NNV5H2/
This century, Bush2 is president, Cheney is VP, Rumsfeld is SECDEF
(again), and one of the "Team B" members is deputy SECDEF and credited
with the Iraq policy
https://en.wikipedia.org/wiki/Paul_Wolfowitz
A cousin of White House Chief of Staff Card was dealing with the
Iraqis at the UN and was given evidence that the WMDs (tracing back
to the US in the Iran/Iraq war) had been decommissioned. The cousin
shared this with Card, Powell and others ... then is locked up in a
military hospital; the book was published in 2010 (before the
decommissioned WMDs were declassified)
https://www.amazon.com/EXTREME-PREJUDICE-Terrifying-Story-Patriot-ebook/dp/B004HYHBK2/
NY Times series from 2014: the decommissioned WMDs (tracing back to
US from the Iran/Iraq war) had been found early in the invasion, but
the information was classified for a decade
http://www.nytimes.com/interactive/2014/10/14/world/middleeast/us-casualties-of-iraq-chemical-weapons.html
... military-industrial-complex wanted a war so badly that corporate
reps were telling former eastern block countries that if they voted
for IRAQ2 invasion in the UN, they would get membership in NATO and
(directed appropriation) USAID (can *ONLY* be used for purchase of
modern US arms). From the law of unintended consequences, the invaders
were told to bypass ammo dumps looking for WMDs, when they got around
to going back, over a million metric tons had evaporated (later
showing up in IEDs).
https://www.amazon.com/Prophets-War-Lockheed-Military-Industrial-ebook/dp/B0047T86BA/
Military-Industrial(-Congressional) Complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
Team B posts
https://www.garlic.com/~lynn/submisc.html#team.b
S&L Crisis posts
https://www.garlic.com/~lynn/submisc.html#s&l.crisis
perpetual war posts
https://www.garlic.com/~lynn/submisc.html#perpetual.war
WMD posts
https://www.garlic.com/~lynn/submisc.html#wmds
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Netscape Date: 12 Aug, 2024 Blog: Facebook
re:
trivia: for the transactions through the internet, we had redundant gateways with multiple connections into various (strategic) locations around the network. I wanted to do router updates ... but the backbone was in the process of transition to hierarchical routing ... so had to make do with multiple (DNS) "A-records". I then was giving classes to 20-30 recent-graduate paper-millionaire employees (mostly working on the browser) on business critical dataprocessing ... including A-record operation, showing support examples from BSD 4.3 reno/tahoe clients ... and getting push back that it was too complex. Then I started making references to "if it wasn't in Stevens' book, they wouldn't do it". It took me a year to get multiple A-record support into the browser.
One of the first e-stores was a large sporting goods operation that was doing TV advertising during weekend national football games ... this was in the period when ISPs were still doing weekend rolling-maintenance downtime windows. And even with their webserver having multiple connections to different parts of the internet, there were browsers that couldn't get a connection (because of missing multiple A-record support).
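The client-side "multiple A-record" behavior described above can be sketched roughly (a minimal sketch; `connect_any` is a hypothetical helper, not Netscape's or BSD's actual code): resolve every address the name maps to and try each in turn, instead of giving up after the first failure.

```python
# Sketch of client-side "multiple A-record" support: try every address
# returned for a name until one connects. Hypothetical helper, for
# illustration only.
import socket

def connect_any(host, port, timeout=5.0):
    """Resolve all addresses for host and return the first socket that connects."""
    last_err = None
    for family, socktype, proto, _canon, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        try:
            s = socket.socket(family, socktype, proto)
        except OSError as err:      # address family not supported here
            last_err = err
            continue
        s.settimeout(timeout)
        try:
            s.connect(addr)         # success: use this server replica
            return s
        except OSError as err:      # unreachable: close and try the next
            last_err = err
            s.close()
    raise last_err if last_err else OSError("no addresses for %s" % host)
```

With several A-records published for one name (a webserver with connections into different parts of the internet), a client like this reaches whichever replica or path is still up, which is exactly what the browsers missing such support couldn't do.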
payment gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
some posts mentioning netscape and multiple a-records
https://www.garlic.com/~lynn/2017h.html#47 Aug. 9, 1995: When the Future Looked Bright for Netscape
https://www.garlic.com/~lynn/2009o.html#40 The Web browser turns 15: A look back;
https://www.garlic.com/~lynn/2005i.html#9 More Phishing scams, still no SSL being used
https://www.garlic.com/~lynn/aepay4.htm#miscdns misc. other DNS
https://www.garlic.com/~lynn/aepay4.htm#comcert17 Merchant Comfort Certificates
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Netscape Date: 13 Aug, 2024 Blog: Facebook
re:
trivia: some of the MIT CTSS/7094 people had gone to the 5th flr for Multics; others went to the science center on the 4th flr to do virtual machines and a bunch of online stuff. The science center wanted a 360/50 to hardware-modify with virtual memory, but all the extra 50s were going to the FAA ATC project, so they had to settle for a 360/40. They did "CMS" on the bare 360/40 hardware in parallel with the 40 hardware mods for virtual memory and development of the virtual machine CP40 (then CMS is moved to a virtual machine and it is CP40/CMS). Then when the 360/67 comes available standard with virtual memory, CP40/CMS morphs into CP67/CMS. Later CP67/CMS morphs into VM370/CMS (after the decision to add virtual memory to all 370 machines).
GML is invented at the science center in 1969 and GML tag support is
added to SCRIPT (which was rewrite of CTSS RUNOFF for CMS). A decade
later, GML morphs into ISO standard SGML and after another decade
morphs into HTML at CERN. The 1st webserver in the US is at (CERN
sister site) Stanford SLAC on its VM370/CMS system
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994
After I had joined the science center, one of my hobbies had been enhanced production operating systems for internal datacenters, and one of my long-term customers was the online sales and marketing support HONE systems ... 1st CP67/CMS, then morphed into VM370/CMS. All US HONE datacenters were consolidated in Palo Alto (in parallel, clone HONE datacenters were cropping up all over the world): single-system-image, shared DASD with load-balancing and fall-over across the large complex of multiprocessors (trivia: when FACEBOOK 1st moves into silicon valley, it is into a new bldg built next door to the former consolidated US HONE datacenter). I had also transferred to San Jose Research and we would have monthly user group meetings at SLAC.
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML posts
https://www.garlic.com/~lynn/submain.html#sgml
CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Netscape Date: 13 Aug, 2024 Blog: Facebook
re:
some internet trivia: the primary person behind GML (before inventing
it in 1969) was hired to promote Cambridge's CP67 wide-area network
(RSCS/VNET, which morphs into the internal corporate network, larger
than arpanet/internet from just about the beginning until sometime
mid/late 80s; the technology also used for the corporate-sponsored
Univ BITNET)
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
Some of us transfer out to SJR in the 2nd half of the 70s (including
Edson, responsible for RSCS/VNET & the CP67 wide-area network). In
the early 80s, I get the HSDT project, T1 and faster computer links
(both terrestrial and satellite), and was working with the NSF
director; was supposed to get $20M to interconnect the NSF
supercomputer centers. Then congress cuts the budget, some other
things happen and eventually an RFP is released (in part based on
what we already had running). From the 28Mar1986 Preliminary
Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
... NCSA (National Center for Supercomputing Applications) got some
of the funding
http://www.ncsa.illinois.edu/enabling/mosaic
IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed; folklore is that 5of6 members of the corporate executive committee wanted to fire me). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet.
Edson,
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.
... snip ...
SJMerc article about Edson (he passed aug2020) and "IBM'S MISSED
OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at
wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references
from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm
GML, SGML, HTML posts
https://www.garlic.com/~lynn/submain.html#sgml
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Netscape Date: 13 Aug, 2024 Blog: Facebook
re:
possibly more than you ever wanted to know:
23june1969, IBM unbundling: starting to charge for software, services, maint.; traumatic for mainstream software development. The requirement was that revenue cover original development plus ongoing development&support; there was a forecast process for number of customers at low, medium and high price ... some mainstream software couldn't meet revenue at any forecasted price. One was the batch (MVS) system JES2 offering network support: NJE.
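The unbundling forecast test above can be sketched as a toy calculation (all figures, tiers and the helper name below are hypothetical illustrations, not IBM's actual forecast numbers): a product passes only if, at some candidate monthly price, forecast customers times price over the product's life covers original development plus ongoing development & support.

```python
# Toy version of the post-unbundling revenue test. ALL numbers are
# hypothetical; only the shape of the check is the point.
def passes_forecast(dev_cost, support_per_month, months, tiers):
    """tiers: list of (monthly_price, forecast_customers).
    Return the prices at which forecast revenue covers total cost."""
    total_cost = dev_cost + support_per_month * months
    return [price for price, customers in tiers
            if price * customers * months >= total_cost]

# Hypothetical product: raising the price shrinks the customer forecast.
viable = passes_forecast(
    dev_cost=2_000_000, support_per_month=20_000, months=60,
    tiers=[(30, 4000), (300, 500), (600, 80)])
```

A product like RSCS/VNET (huge forecast customer base, tiny price) clears the bar easily; one whose customer forecast collapses at every price tier returns an empty list and fails the requirement.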
RSCS/VNET would meet the requirement (with large profit) even at the lowest possible monthly price of $30/month. RSCS/VNET had done an emulated NJE driver to allow connection of MVS batch systems to the internal network. However NJE had some shortcomings: the code had come from HASP and had "TUCC" in cols 68-71 (from the univ. where it originated), it used spare entries in the 255-entry table of pseudo (spool) devices (usually around 160-180) and would trash traffic where origin and/or destination weren't in the local table. As a result, internal JES2/NJE systems had to be restricted to boundary nodes (hidden behind an RSCS/VNET filter) since the internal network was approaching 700 nodes at the time.
Also the early 70s "Future System" project had recently imploded (different from 360/370 and was going to completely replace it; during FS, internal politics had been killing off 370 efforts) and there was a mad rush to get stuff back into the 370 product pipelines. Also the head of POK (mainstream batch operation) had managed to convince corporate to kill the VM370/CMS product, shutdown the development group and transfer all the people to POK for MVS/XA ... and was veto'ing any announcement of RSCS/VNET (for customers)
JES2 rides in as savior(?): if RSCS/VNET could be announced as a joint/combined product with (MVS) JES2/NJE, each at $600/month ... the enormous projected RSCS/VNET revenue (especially at $600/month) could be used to meet the MVS JES2/NJE revenue requirement. The Endicott lab (entry/mid-range 370s) had managed to save the VM370 product mission (for mid-range 370s) but had to recreate a development group from scratch. Then there was a big explosion in (Endicott mid-range) VM/4341s ... large corporations with orders for hundreds of machines at a time for placing out in departmental areas (sort of the leading edge of the coming distributed computing tsunami). Also, Jan1979 I get con'ed into doing some benchmarks on an engineering VM/4341 for a national lab looking at getting 70 for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami).
some of bitnet (1981) could also be credited to price/performance and
explosion in vm/4341s
https://en.wikipedia.org/wiki/BITNET
.... inside IBM, all the (internal) new vm/4341s helped push the internal network over 1000 nodes in 1983. I was having constant HSDT (T1 and faster computer links, both TCP/IP and non-SNA RSCS/VNET) battles with the communication product group. IBM had the 2701 supporting T1 in the 60s ... but in the 70s, the mainstream move to SNA/VTAM appeared to cap all the standard products at 56kbits (because of VTAM shortcomings?)
23jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
HASP, ASP, JES2, JES3, NJE, NJI posts
https://www.garlic.com/~lynn/submain.html#hasp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Netscape Date: 13 Aug, 2024 Blog: Facebook
re:
Other 4341 folklore: the IBM communication group was blocking release of mainframe TCP/IP support, part of fiercely fighting off distributed computing and client/server (trying to preserve their dumb terminal paradigm). When that got overturned, they changed their tactic and declared that since they had corporate strategic responsibility for everything that crossed datacenter walls, it had to be released through them. What shipped got aggregate of 44kbytes/sec using nearly a whole 3090 processor. I then do the support for RFC1044 and in some tuning tests at Cray Research between a Cray and a 4341, get sustained 4341 channel throughput using only a modest amount of 4341 processor (something like a 500 times improvement in bytes moved per instruction executed).
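The "bytes moved per instruction" comparison can be illustrated with a back-of-envelope calculation. This is only a sketch of how such a metric is formed: the MIPS ratings and the tuned throughput below are invented assumptions, so the resulting ratio will not reproduce the post's measured ~500x figure.

```python
# Back-of-envelope metric for comparing TCP/IP stacks: bytes moved per
# instruction executed. ALL inputs here are hypothetical placeholders.
def bytes_per_instruction(bytes_per_sec, mips, cpu_fraction):
    """Throughput divided by the instruction rate spent on the stack."""
    return bytes_per_sec / (mips * 1e6 * cpu_fraction)

# Shipped stack: 44 kbytes/sec consuming nearly a whole CPU
# (30 MIPS is an assumed, not measured, processor rating).
base = bytes_per_instruction(44_000, mips=30.0, cpu_fraction=1.0)

# RFC1044 path: channel-speed throughput on a slower machine using only
# a modest fraction of its CPU (again, assumed numbers).
tuned = bytes_per_instruction(1_000_000, mips=1.2, cpu_fraction=0.25)

ratio = tuned / base   # orders of magnitude better per instruction
```

The point of the metric is that it normalizes for processor speed: moving more bytes while burning far fewer instructions is what made the RFC1044 path so much more efficient, regardless of the raw MIPS of the machines involved.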
RFC1044 support posts
https://www.garlic.com/~lynn/subnetwork.html#1044
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: PROFS Date: 13 Aug, 2024 Blog: Facebook
The PROFS group was collecting internal apps to wrap 3270 menus around.
Hot topic at Fridays after work was some killer app that would attract mostly computer-illiterate employees. One was email ... another was an online telephone book. Jim Gray would spend one week doing the telephone lookup app ... lookup had to be much faster than reaching for the paper book on the desk and finding the number ... and I was to spend a week collecting organization softcopy of the printed paper books to reformat into telephone book format.
There was a rapidly spreading rumor that members of the executive committee were communicating via email. This was back when 3270 terminals were part of the annual budget and required VP-level sign-off ... and then you'd find mid-level executives rerouting 3270 deliveries to their desks (and their administrative assistants) ... to make it appear like they might be computer literate. There were enormous numbers of 3270s that spent their life with the VM370 logon logo (or possibly the PROFS menu) being burned into the screen (with the admin actually handling things like email). This continued at least through some of the 90s, with executives rerouting PS2/486s and 8514 screens to their desks (partly used for 3270 emulation, burning in the VM370 logo or PROFS menu, and partly status symbols)
PROFS got the source for a very early VMSG to use for the email client ... then when the VMSG author tried to offer them a much more mature and enhanced version, an attempt was made to fire him. The whole thing quieted down when he showed that his initials were in a non-displayed field of every PROFS email. After that he only shared his source with me and one other person.
Later I remember somebody claiming that when congress demanded all executive branch PROFS notes involving CONTRA ... they had to find somebody with every possible clearance to scan the backup tapes.
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
some posts mentioning profs, vmsg, contra
https://www.garlic.com/~lynn/2022f.html#64 Trump received subpoena before FBI search of Mar-a-lago home
https://www.garlic.com/~lynn/2021j.html#23 Programming Languages in IBM
https://www.garlic.com/~lynn/2019d.html#96 PROFS and Internal Network
https://www.garlic.com/~lynn/2018f.html#54 PROFS, email, 3270
https://www.garlic.com/~lynn/2018.html#20 IBM Profs
https://www.garlic.com/~lynn/2018.html#18 IBM Profs
https://www.garlic.com/~lynn/2017b.html#74 The ICL 2900
https://www.garlic.com/~lynn/2016e.html#76 PROFS
https://www.garlic.com/~lynn/2014.html#13 Al-Qaeda-linked force captures Fallujah amid rise in violence in Iraq
https://www.garlic.com/~lynn/2014.html#1 Application development paradigms [was: RE: Learning Rexx]
https://www.garlic.com/~lynn/2013f.html#69 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2012e.html#55 Just for a laugh... How to spot an old IBMer
https://www.garlic.com/~lynn/2012d.html#47 You Don't Need a Cyber Attack to Take Down The North American Power Grid
https://www.garlic.com/~lynn/2011i.html#6 Robert Morris, man who helped develop Unix, dies at 78
https://www.garlic.com/~lynn/2011f.html#11 History of APL -- Software Preservation Group
https://www.garlic.com/~lynn/2011e.html#57 SNA/VTAM Misinformation
https://www.garlic.com/~lynn/2011b.html#83 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2009q.html#64 spool file tag data
https://www.garlic.com/~lynn/2002h.html#64 history of CMS
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: OSI and XTP Date: 13 Aug, 2024 Blog: Facebook
At Interop '88, I was surprised at all the OSI booths.
I was on Greg Chesson's XTP TAB ... and some gov. groups were involved ... so took it to the ISO-chartered ANSI X3S3.3 for layers 4&3, as "HSP". Eventually X3S3.3 said that ISO required that standards work can only be done for protocols that conform to the OSI model ... XTP didn't qualify because it 1) supported internetworking (aka TCP/IP), which doesn't exist in OSI, 2) skipped the layer 4/3 interface, and 3) went directly to the LAN MAC interface, which doesn't exist in the OSI model.
Then the joke was that IETF required at least two interoperable implementations before proceeding in the standards process, while ISO didn't even require a standard to be implementable.
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
interop 88 posts
https://www.garlic.com/~lynn/subnetwork.html#interop88
XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Former AMEX President and New IBM CEO Date: 14 Aug, 2024 Blog: Facebook
Learson was CEO and tried (and failed) to block the bureaucrats, careerists and MBAs from destroying the Watson culture and legacy ... then 20yrs later, IBM (w/one of the largest losses in the history of US companies) was being reorged into the 13 "baby blues" in preparation for breaking up the company
some more Learson detail ... also John Boyd, I had been introduced to
him in the early 80s and would sponsor his talks at IBM. In 89/90, the
commandant of the marine corps leverages Boyd for a corps make-over,
at a time when IBM was desperately in need of a make-over
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
AMEX was in competition with KKR for the private equity LBO of RJR and KKR
wins. Then KKR runs into trouble with RJR and hires away the AMEX
president to help
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco
Then IBM board hires former AMEX president to help with IBM make-over
... who uses some of the same tactics used at RJR (ref gone 404, but
lives on at wayback machine).
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml
The former AMEX president then leaves IBM to head up another major
private-equity company
http://www.motherjones.com/politics/2007/10/barbarians-capitol-private-equity-public-enemy/
"Lou Gerstner, former ceo of ibm, now heads the Carlyle Group, a
Washington-based global private equity firm whose 2006 revenues of $87
billion were just a few billion below ibm's. Carlyle has boasted
George H.W. Bush, George W. Bush, and former Secretary of State James
Baker III on its employee roster."
... snip ...
aka around the turn of the century, PEs were buying up beltway bandits and government contractors and channeling as much money as possible into their own pockets, also hiring prominent politicians to lobby congress to outsource gov. to their companies (laws exist that companies can't directly use money from gov. contracts for lobbying) ... poster child was the companies doing outsourced high security clearances that were found to be just doing the paper work, not actually doing the background checks.
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
Gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
pension posts
https://www.garlic.com/~lynn/submisc.html#pensions
private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Former AMEX President and New IBM CEO Date: 14 Aug, 2024 Blog: Facebook
re:
... turning into a financial engineering company, Stockman; The Great
Deformation: The Corruption of Capitalism in America
https://www.amazon.com/Great-Deformation-Corruption-Capitalism-America-ebook/dp/B00B3M3UK6/
pg464/loc9995-10000:
IBM was not the born-again growth machine trumpeted by the mob of Wall
Street momo traders. It was actually a stock buyback contraption on
steroids. During the five years ending in fiscal 2011, the company
spent a staggering $67 billion repurchasing its own shares, a figure
that was equal to 100 percent of its net income.
pg465/loc10014-17:
Total shareholder distributions, including dividends, amounted to $82
billion, or 122 percent, of net income over this five-year
period. Likewise, during the last five years IBM spent less on capital
investment than its depreciation and amortization charges, and also
shrank its constant dollar spending for research and development by
nearly 2 percent annually.
... snip ...
(2013) New IBM Buyback Plan Is For Over 10 Percent Of Its Stock
http://247wallst.com/technology-3/2013/10/29/new-ibm-buyback-plan-is-for-over-10-percent-of-its-stock/
(2014) IBM Asian Revenues Crash, Adjusted Earnings Beat On Tax Rate
Fudge; Debt Rises 20% To Fund Stock Buybacks (gone behind paywall)
https://web.archive.org/web/20140201174151/http://www.zerohedge.com/news/2014-01-21/ibm-asian-revenues-crash-adjusted-earnings-beat-tax-rate-fudge-debt-rises-20-fund-st
The company has represented that its dividends and share repurchases
have come to a total of over $159 billion since 2000.
... snip ...
(2016) After Forking Out $110 Billion on Stock Buybacks, IBM Shifts
Its Spending Focus
https://www.fool.com/investing/general/2016/04/25/after-forking-out-110-billion-on-stock-buybacks-ib.aspx
(2018) ... still doing buybacks ... but will (now?, finally?, a
little?) shift focus, needing the money for the redhat purchase.
https://www.bloomberg.com/news/articles/2018-10-30/ibm-to-buy-back-up-to-4-billion-of-its-own-shares
(2019) IBM Tumbles After Reporting Worst Revenue In 17 Years As Cloud
Hits Air Pocket (gone behind paywall)
https://web.archive.org/web/20190417002701/https://www.zerohedge.com/news/2019-04-16/ibm-tumbles-after-reporting-worst-revenue-17-years-cloud-hits-air-pocket
more financial engineering company
IBM deliberately misclassified mainframe sales to enrich execs,
lawsuit claims. Lawsuit accuses Big Blue of cheating investors by
shifting systems revenue to trendy cloud, mobile tech
https://www.theregister.com/2022/04/07/ibm_securities_lawsuit/
IBM has been sued by investors who claim the company under former CEO
Ginni Rometty propped up its stock price and deceived shareholders by
misclassifying revenues from its non-strategic mainframe business -
and moving said sales to its strategic business segments - in
violation of securities regulations.
... snip ...
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
former AMEX president posts
https://www.garlic.com/~lynn/submisc.html#gerstner
stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Token-Ring, Ethernet, FCS Date: 15 Aug, 2024 Blog: Facebook
IBM AWD did their own PC/RT (16bit PC/AT bus) 4mbit token-ring card ... however for RS/6000 microchannel, AWD was directed that they couldn't do their own cards, but had to use PS2 microchannel cards. The communication group was fiercely fighting off client/server and distributed computing (trying to preserve their dumb terminal paradigm) and had severely performance-kneecapped the PS2 cards (including 16mbit T/R) ... and the microchannel 16mbit T/R card had lower throughput than the PC/RT 4mbit T/R card. The joke was that a RS/6000 16mbit T/R server would have lower throughput than a PC/RT 4mbit T/R server.
The new IBM Almaden Research bldg was heavily provisioned with CAT wiring, apparently assuming 16mbit T/R, but they found 10mbit Ethernet had higher aggregate LAN throughput and lower latency (than 16mbit T/R) ... and $69 10mbit Ethernet cards (over IBM CAT wiring) also had higher card throughput than the $800 16mbit T/R card (as well as higher card throughput than the PC/RT 4mbit T/R card). They also found that with the price difference in giving every station a $69 10mbit Ethernet card (compared to an $800 16mbit T/R card), they could buy 4 or 5 high-performance routers with mainframe channel interfaces and 16 high-performance 10mbit Ethernet interfaces ... so there was only a score of stations sharing each Ethernet LAN (a 1988 ACM SIGCOMM article analyzing 10mbit Ethernet measured cards sustaining 8.5mbits/sec, and with 30 stations in a low-level device-driver loop constantly transferring minimum-sized packets, sustained effective LAN throughput only dropped off to 8mbits/sec).
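The framing-overhead arithmetic behind those numbers can be sketched as a rough back-of-envelope model (not the SIGCOMM measurement itself; the 8-byte preamble and 12-byte interframe gap are standard 10mbit Ethernet parameters):

```python
# Back-of-envelope 10mbit Ethernet efficiency: how much of the raw
# 10mbit/sec survives per-frame overhead (8-byte preamble plus the
# 9.6-microsecond interframe gap, i.e. 12 byte times) at a given frame size.
RAW_BPS = 10_000_000
PREAMBLE = 8   # bytes on the wire before each frame
IFG = 12       # interframe gap expressed in byte times

def effective_mbps(frame_bytes: int) -> float:
    wire_bytes = frame_bytes + PREAMBLE + IFG
    return RAW_BPS * frame_bytes / wire_bytes / 1_000_000

print(round(effective_mbps(1518), 2))  # max-size frames: ~9.87 mbit/sec on the wire
print(round(effective_mbps(64), 2))    # min-size frames: ~7.62 mbit/sec on the wire
```

Even all-minimum-size frames keep the wire above 7.5mbit/sec, consistent with the article's observation that 30 stations blasting minimum packets only pulled sustained LAN throughput down to about 8mbit/sec (the 8.5mbit/sec figure was per-card sustained throughput, limited by the card rather than the wire).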
The communication group had also been fiercely fighting off release of mainframe TCP/IP support, but when that got reversed, they changed their tactic and said that since they had corporate strategic responsibility for everything that crossed datacenter walls, it had to be released through them ... what shipped got 44kbytes/sec aggregate throughput using nearly a whole 3090 processor. I then did RFC1044 support and in some tuning tests at Cray Research between a Cray and an IBM 4341, got sustained 4341 channel throughput using only a modest amount of 4341 CPU (something like a 500 times improvement in bytes moved per instruction executed).
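For scale, the raw throughput ratio in that comparison is easy to compute (the 44kbytes/sec figure is from the text; the ~1mbyte/sec sustained channel rate is an illustrative assumption):

```python
# Raw throughput ratio of the stock stack vs the RFC1044 path.
# 44kbytes/sec aggregate is from the text; ~1mbyte/sec sustained 4341
# channel throughput is an assumed, illustrative figure.
stock_bps = 44_000
rfc1044_bps = 1_000_000
print(round(rfc1044_bps / stock_bps))  # ~23x raw throughput
```

The ~500x bytes-per-instruction figure is much larger than this ~23x because the CPU side moved the other way at the same time: from consuming nearly an entire 3090 processor down to a modest fraction of a far slower 4341.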
AWD had also done an enhanced version of the IBM mainframe ESCON protocol for RS/6000 as "SLA" (incompatible with everybody else, except other RS/6000s) that was full-duplex with 10% higher sustained transfer, concurrently in both directions. We con'ed one of the high-speed router vendors into adding SLA support (in addition to IBM & non-IBM mainframe channels, FDDI, Ethernet, T1, T3, etc).
In 1988, the IBM branch office had asked if I could help LLNL (national lab) get some serial stuff they had been working with standardized, which quickly becomes the fibre-channel standard ("FCS", initially full-duplex, 1gbit concurrent in both directions). AWD had been playing with doing an 800mbit version of the RS/6000 SLA, but we convinced them to work with "FCS" instead.
FCS & FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
recent posts mentioning Almaden Research and Ethernet
https://www.garlic.com/~lynn/2024d.html#7 TCP/IP Protocol
https://www.garlic.com/~lynn/2024c.html#69 IBM Token-Ring
https://www.garlic.com/~lynn/2024c.html#56 Token-Ring Again
https://www.garlic.com/~lynn/2024c.html#47 IBM Mainframe LAN Support
https://www.garlic.com/~lynn/2024b.html#50 IBM Token-Ring
https://www.garlic.com/~lynn/2024b.html#41 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#22 HA/CMP
https://www.garlic.com/~lynn/2024.html#117 IBM Downfall
https://www.garlic.com/~lynn/2024.html#97 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#41 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#33 RS/6000 Mainframe
https://www.garlic.com/~lynn/2024.html#5 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2023g.html#76 Another IBM Downturn
https://www.garlic.com/~lynn/2023c.html#91 TCP/IP, Internet, Ethernett, 3Tier
https://www.garlic.com/~lynn/2023c.html#49 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2023c.html#6 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#83 IBM's Near Demise
https://www.garlic.com/~lynn/2023b.html#50 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2023b.html#34 Online Terminals
https://www.garlic.com/~lynn/2023.html#77 IBM/PC and Microchannel
https://www.garlic.com/~lynn/2022h.html#57 Christmas 1989
https://www.garlic.com/~lynn/2022f.html#18 Strange chip: Teardown of a vintage IBM token ring controller
https://www.garlic.com/~lynn/2022f.html#4 What is IBM SNA?
https://www.garlic.com/~lynn/2022e.html#24 IBM "nine-net"
https://www.garlic.com/~lynn/2022b.html#85 David Boggs, Co-Inventor of Ethernet, Dies at 71
https://www.garlic.com/~lynn/2022b.html#84 David Boggs, Co-Inventor of Ethernet, Dies at 71
https://www.garlic.com/~lynn/2022b.html#65 David Boggs, Co-Inventor of Ethernet, Dies at 71
https://www.garlic.com/~lynn/2021j.html#50 IBM Downturn
https://www.garlic.com/~lynn/2021i.html#69 IBM MYTE
https://www.garlic.com/~lynn/2021g.html#42 IBM Token-Ring
https://www.garlic.com/~lynn/2021d.html#15 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#87 IBM SNA/VTAM (& HSDT)
https://www.garlic.com/~lynn/2021b.html#45 Holy wars of the past - how did they turn out?
https://www.garlic.com/~lynn/2021.html#77 IBM Tokenring
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Dems Prove What Other Countries Know: Negotiating Drug Prices Works Date: 15 Aug, 2024 Blog: Facebook
Dems Prove What Other Countries Know: Negotiating Drug Prices Works. Under the Biden program, Medicare will pay less than half of the current list prices on nine of the first 10 drugs subject to negotiation
Medicare drug price negotiations: 10 list prices drop between 38% to
79%. If the prices were set in 2023, Medicare would have saved $6
billion.
https://arstechnica.com/science/2024/08/big-name-drugs-see-price-drops-in-first-round-of-medicare-negotiations/
Big Pharma push back on first Medicare drug price cuts
https://medicalxpress.com/news/2024-08-big-pharma-medicare-drug-price.html
White House settles Medicare drug price negotiations. The Medicare
drug price negotiations are slated to save the federal program around
$6 billion across 10 selected medications.
https://www.techtarget.com/revcyclemanagement/news/366604658/White-House-settles-Medicare-drug-price-negotiations
The first major legislation after the (90s) Fiscal Responsibility Act was allowed to lapse (2002) was MEDICARE PART-D (2003). The US Comptroller General said that PART-D was an enormous gift to the pharmaceutical industry, and would come to be a $40T unfunded mandate, dwarfing all other budget items.
CBS 60mins had a segment on the 18 Republicans responsible for getting PART-D passed ... just before the final vote, they inserted a one-sentence change prohibiting competitive bidding (and blocked the CBO from distributing a report on the effect of the last-minute change). 60mins showed drugs under MEDICARE PART-D that were three times the cost of identical drugs with competitive bidding. They also found that within 12 months of the vote, all 18 Republicans had resigned and were on drug industry payrolls.
medicare part-d posts
https://www.garlic.com/~lynn/submisc.html#medicare.part-d
fiscal responsibility act posts
https://www.garlic.com/~lynn/submisc.html#fiscal.responsibility.act
comptroller general posts
https://www.garlic.com/~lynn/submisc.html#comptroller.general
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Dems Prove What Other Countries Know: Negotiating Drug Prices Works Date: 15 Aug, 2024 Blog: Facebook
re:
The Fiscal Responsibility Act required that spending couldn't exceed tax revenue ... on the way to eliminating all federal debt. A 2010 CBO report had 2003-2009 (corporate and wealthy) taxes cut by $6T and spending increased by $6T, for a $12T gap compared to a fiscally responsible budget (1st time taxes were cut to go to war, in this case two wars). Sort of a confluence of special interests wanting huge tax cuts, the military-industrial complex wanting huge spending increases, and the Federal Reserve and Too-Big-To-Fail banks wanting huge debt increases (the Fed provided trillions in ZIRP funds to TBTF, who turned around and used it to buy treasuries).
fiscal responsibility act posts
https://www.garlic.com/~lynn/submisc.html#fiscal.responsibility.act
Fed Reserve chair posts
https://www.garlic.com/~lynn/submisc.html#fed.chairman
too-big-to-fail (too-big-to-prosecute, too-big-to-jail) posts
https://www.garlic.com/~lynn/submisc.html#too-big-to-fail
ZIRP posts
https://www.garlic.com/~lynn/submisc.html#zirp
tax fraud, tax evasion, tax loopholes, tax abuse, tax avoidance, tax
haven posts
https://www.garlic.com/~lynn/submisc.html#tax.evasion
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: Article on new mainframe use Newsgroups: comp.arch Date: Thu, 15 Aug 2024 15:37:31 -1000
mitchalsup@aol.com (MitchAlsup1) writes:
we got the HA/6000 project in late 80s, originally for NYTimes to move their newspaper system (ATEX) off VAXCluster to RS/6000; I rename it HA/CMP when I start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (oracle, sybase, informix, ingres). Out marketing, I coined the terms "disaster survivability" and "geographic survivability"; then the IBM S/88 Product Administrator starts taking us around to their customers and got me to write a section for the corporate continuous availability strategy document (it got pulled when both Rochester/AS400 and POK/mainframe complained they couldn't meet the objectives).
In an early jan92 meeting with the Oracle CEO, AWD/Hester told them that we would have 16processor clusters mid92 and 128processor clusters ye92, but a couple weeks later cluster scale-up is transferred for announce as IBM Supercomputer (technical/scientific *ONLY*) and we are told we can't work with anything that had more than four processors; we leave IBM a few months later (commercial AS400 & mainframe complaining they couldn't compete likely contributed).
The structure of System/88, a fault-tolerant computer
https://ieeexplore.ieee.org/document/5387672
IBM High Availability Cluster Multiprocessing
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic
survivability posts
https://www.garlic.com/~lynn/submain.html#available
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM SAA and Somers Date: 15 Aug, 2024 Blog: Facebook
My wife had been asked to be co-author of the response to a large gov. RFP where she included 3-tier network architecture. We were then out making customer executive presentations on TCP/IP, Ethernet, 3tier networks, client/server, distributed computing and taking lots of misinformation arrows from the SAA, SNA and token-ring forces. Somebody I had worked with years before was then an executive (top floor, corner office in Somers) responsible for SAA, and we would drop by periodically to complain how badly some of his people were behaving (90-92, until we left IBM).
I had been introduced to John Boyd in early 80s and use to sponsor his
briefings at IBM. In 89/90, the Commandant of the Marine Corps
leverages Boyd for a Corps make-over ... at a time when IBM was
desperately in need of make-over.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
IBM then has one of the largest losses in the history of US companies
and was being re-organized into the 13 "baby blues" in preparation for
breaking up the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup of the company. Before we get
started, the board brings in the former president of AMEX who
(somewhat) reverses the breakup (although it wasn't long before the
disk division is gone).
John Boyd posts and web URLs
https://www.garlic.com/~lynn/subboyd.html
3tier network architecture posts
https://www.garlic.com/~lynn/subnetwork.html#3tier
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: DEBE Date: 16 Aug, 2024 Blog: Facebook
I had taken a two-credit-hour intro to fortran/computers and at the end of the semester was hired to rewrite 1401 MPIO (the unit record front-end, card->tape and tape->printer/punch, for the 709) for the 360/30. The univ was getting a 360/67 to replace the 709/1401 and temporarily got a 360/30 replacing the 1401 pending availability of the 360/67. I was given a bunch of software and hardware manuals and got to design my own monitor, device drivers, interrupt handlers, error recovery/retry, storage management, etc. The univ shutdown the datacenter on weekends and I had the whole place dedicated (although 48hrs w/o sleep made monday classes hard). Within a few weeks, I had a 2000-card assembler program (that had some of the DEBE-like features). When the 360/67 came in, it ran as a 360/65 with OS/360 (TSS/360 never was ready) and I was hired fulltime responsible for OS/360.
Jan1968, the IBM Science Center came out to install cp67/cms (3rd installation, after Cambridge itself and MIT Lincoln Labs; aka the precursor to VM370/CMS) and I mostly got to play with it during my dedicated weekend window.
Lincoln Labs had also done LLMPS, a generalized multiprogramming
supervisor that had been preloaded with most of the DEBE kind of
functions, as a contribution to the SHARE program library. Univ. of
Michigan also used LLMPS as the initial scaffolding for its virtual
memory MTS operating system implementation.
https://en.wikipedia.org/wiki/Michigan_Terminal_System
LLMPS
https://apps.dtic.mil/sti/tr/pdf/AD0650190.pdf
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
some past posts mentioning DEBE, Lincoln Labs, & LLMPS
https://www.garlic.com/~lynn/2023.html#46 MTS & IBM 360/67
https://www.garlic.com/~lynn/2022.html#1 LLMPS, MPIO, DEBE
https://www.garlic.com/~lynn/2021b.html#27 DEBE?
https://www.garlic.com/~lynn/2021b.html#26 DEBE?
https://www.garlic.com/~lynn/2018e.html#100 The (broken) economics of OSS
https://www.garlic.com/~lynn/2017g.html#30 Programmers Who Use Spaces Paid More
https://www.garlic.com/~lynn/2015d.html#35 Remember 3277?
https://www.garlic.com/~lynn/2014j.html#50 curly brace languages source code style quides
https://www.garlic.com/~lynn/2009p.html#76 The 50th Anniversary of the Legendary IBM 1401
https://www.garlic.com/~lynn/2004l.html#16 Xah Lee's Unixism
https://www.garlic.com/~lynn/2002n.html#64 PLX
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM SAA and Somers Date: 16 Aug, 2024 Blog: Facebook
re:
Original F-15 was supposedly F-111 follow-on with swing-wing. Boyd redid the design, cutting weight nearly in half, eliminating the swing-wing ... the weight of the pivot had more than offset most of the advantages of having swing-wing.
He was then responsible for the YF16&YF17 (which become the F16 and F18) and also helped Pierre Sprey with the A10. After Boyd passed, the former commandant of the Marine Corps (who himself passed last spring) continued to sponsor Boyd conferences at Marine Corps Univ in Quantico.
I was in San Jose Research but also blamed for online computer conferencing on the internal network ... folklore is that when the corporate executive committee was told, 5 of 6 wanted to fire me. I was then transferred to Yorktown, but continued to live in San Jose (with offices in SJR & then ALM ... along with part of a wing in the Los Gatos lab), and had to commute to YKT a couple times a month.
Another Boyd story was that he was very vocal that the electronics across the trail wouldn't work; then (possibly as punishment) he was put in command of "spook base" (about the same time I was at Boeing). A Boyd biography claims "spook base" was a $2.5B "wind fall" for IBM.
As an undergraduate at the univ, I had been hired fulltime responsible for OS/360. Then before I graduate, I was hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidating all dataprocessing into an independent business unit). I think the Renton datacenter was possibly the largest in the world (a couple hundred million in 360s; turns out only 1/10th of "spook base"). Lots of politics between the Renton director and the CFO, who only had a 360/30 up at Boeing field for payroll (although they enlarged the machine room for a 360/67 for me to play with when I wasn't doing other stuff).
John Boyd posts and web URLs
https://www.garlic.com/~lynn/subboyd.html
other recent posts mentioning working for Boeing CFO
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024e.html#13 360 1052-7 Operator's Console
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#79 Other Silicon Valley
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024d.html#74 Some Email History
https://www.garlic.com/~lynn/2024d.html#63 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024d.html#40 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#25 IBM 23June1969 Unbundling Announcement
https://www.garlic.com/~lynn/2024d.html#22 Early Computer Use
https://www.garlic.com/~lynn/2024c.html#100 IBM 4300
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024c.html#9 Boeing and the Dark Age of American Manufacturing
https://www.garlic.com/~lynn/2024b.html#111 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#97 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#60 Vintage Selectric
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2024b.html#44 Mainframe Career
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#77 Boeing's Shift from Engineering Excellence to Profit-Driven Culture: Tracing the Impact of the McDonnell Douglas Merger on the 737 Max Crisis
https://www.garlic.com/~lynn/2024.html#73 UNIX, MULTICS, CTSS, CSC, CP67
https://www.garlic.com/~lynn/2024.html#43 Univ, Boeing Renton and "Spook Base"
https://www.garlic.com/~lynn/2024.html#25 1960's COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME Origin and Technology (IRS, NASA)
https://www.garlic.com/~lynn/2024.html#23 The Greatest Capitalist Who Ever Lived
https://www.garlic.com/~lynn/2024.html#17 IBM Embraces Virtual Memory -- Finally
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: Article on new mainframe use Newsgroups: comp.arch Date: Fri, 16 Aug 2024 08:28:01 -1000
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
I was brought in as a consultant to a small client/server startup; two former Oracle people (that we had worked with on cluster scale-up for HA/CMP) were there responsible for the "commerce server" and wanted to do payment transactions on the server. The startup had also invented SSL they wanted to use; the result is now frequently called ecommerce.
The first webservers were flat files ... but later there were increasing numbers of RDBMS-based webservers that were experiencing an increasing number of exploits. One problem was that as part of periodic maintenance, the internet interfaces were shut down, along with multiple layers of firewalls and security facilities; maintenance was performed and the process reversed to bring the webserver back up. Webserver RDBMS maintenance tended to be much more complex and time consuming (than for flat-file based webservers) ... and frequently, in a big rush to get back online, they failed to reactivate all the firewall and security facilities.
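The failure mode described (teardown steps reversed by hand, under time pressure) is the classic argument for pairing every bring-down with a guaranteed bring-up. A minimal sketch, with entirely hypothetical layer names:

```python
# Sketch of avoiding the failure mode above: maintenance that disables
# security layers and relies on someone remembering to re-enable each one.
# A context manager guarantees the reversal even if maintenance fails.
# All layer names are hypothetical.
from contextlib import contextmanager

LAYERS = ["internet-interface", "outer-firewall", "inner-firewall", "ids"]
ACTIONS = []  # audit log of what was done, in order

def set_layer(name: str, up: bool) -> None:
    ACTIONS.append(("enable" if up else "disable", name))

@contextmanager
def maintenance_window():
    for layer in LAYERS:                 # take layers down, outermost first
        set_layer(layer, up=False)
    try:
        yield
    finally:                             # guaranteed reversal, innermost first
        for layer in reversed(LAYERS):
            set_layer(layer, up=True)

with maintenance_window():
    ACTIONS.append(("run", "rdbms-maintenance"))

# every disable has a matching enable, even if maintenance had raised
assert [a for a in ACTIONS if a[0] == "disable"] == [("disable", l) for l in LAYERS]
```

The point is structural: the re-enable steps live in a `finally`, so they cannot be skipped by an early exit or an error, where a hand-run checklist can.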
trivia: I worked with Jim Gray and Vera Watson on the original SQL/relational implementation, "System/R". Also we were able to do technical transfer to Endicott (under the "radar" while corporation was preoccupied with "EAGLE", next great DBMS) for SQL/DS. Then when "EAGLE" imploded, there was a request for how fast could "System/R" be ported to MVS (eventually released as DB2).
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
working on ecommerce posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Early Networking Date: 16 Aug, 2024 Blog: Facebook
Edson was responsible for the science center wide-area network in the 60s that morphs into the internal corporate network ... larger than arpanet/internet from just about the beginning until sometime mid/late 80s ... technology also used for the corporate sponsored univ BITNET. Account by one of the inventors of GML at the science center in 1969 (a decade later it morphs into ISO standard SGML and after another decade morphs into HTML at CERN), who had been originally brought in to promote the wide-area network:
... aka CP67/CMS was precursor to VM370/CMS ... TYMSHARE made their
VM370/CMS-based online computer conferencing system available "free"
to (mainframe user group) SHARE in Aug1976 ... archives:
http://vm.marist.edu/~vmshare
I had cut a deal with TYMSHARE for a monthly tape dump of all VMSHARE (and later PCSHARE) files, for making available on internal systems and the network.
I got the HSDT project in the early 80s, T1 and faster computer links (both terrestrial and satellite) ... lots of battles with the communication product group ... for some reason they had the "2701" in the 60s supporting T1 speeds, but apparently because of problems with the cutover to SNA/VTAM in the mid-70s, capped their products at 56kbit/sec.
At the time of the great cutover to internetworking 1/1/1983, the
internal corporate network was rapidly approaching 1000 hosts, which
it soon passes. Archived post with list of corporate locations
world-wide adding one or more nodes:
https://www.garlic.com/~lynn/2006k.html#8
other Edson/science center trivia:
https://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml
Edson
https://en.wikipedia.org/wiki/Edson_Hendricks
SJMerc article about Edson (he passed aug2020) and "IBM'S MISSED
OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at
wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references from
Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: Article on new mainframe use Newsgroups: comp.arch Date: Fri, 16 Aug 2024 13:24:30 -1000
Stephen Fuld <sfuld@alumni.cmu.edu.invalid> writes:
... pg11
https://archive.computerhistory.org/resources/access/text/2013/05/102658267-05-01-acc.pdf
a little more here
https://www.mcjones.org/System_R/SQL_Reunion_95/sqlr95-DB2.html
note there were a few System/R "joint studies" ... one was BofA getting 60 vm4341s for distributed operation (Jim wanted me to take it over when he leaves for Tandem).
system/r posts
https://www.garlic.com/~lynn/submain.html#systemr
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: RS/6000, PowerPC, AS/400 Date: 16 Aug, 2024 Blog: Facebook
The 801/RISC ROMP (Research & OPD) chip was originally going to be used (by Austin) for the DISPLAYWRITER followon. Rochester was going to use a different 801/RISC chip for the S34/S36/S38 follow-on, AS/400. And Endicott was going to use the 801/RISC ILIAD chip for the 4331/4341 follow-ons, 4361/4381. For various reasons Rochester and Endicott dropped back to CISC chips. When the DISPLAYWRITER followon was canceled (lots of that market had moved to IBM/PC), Austin decided to pivot to the unix workstation market and got the company that had done the unix port to IBM/PC to do one for ROMP, which becomes the PC/RT and AIX (austin unix?).
Austin then starts on the RIOS chipset that becomes RS/6000 & POWER. Then the executive we reported to doing IBM's HA/CMP goes over to head up Somerset (AIM: Apple, IBM, Motorola), the single-chip effort for PowerPC ... which goes through a number of variations and is used by Rochester to move AS/400 to 801/RISC.
AIM/Somerset
https://en.wikipedia.org/wiki/PowerPC
https://en.wikipedia.org/wiki/PowerPC_600
https://wiki.preterhuman.net/The_Somerset_Design_Center
AS/400
https://en.wikipedia.org/wiki/IBM_AS/400
Fort Knox to converge myriad of CISC microprocessors to 801/RISC
https://en.wikipedia.org/wiki/IBM_AS/400#Fort_Knox
AS/400 move to PowerPC
https://en.wikipedia.org/wiki/IBM_AS/400#The_move_to_PowerPC
We originally got HA/6000 for the NYTimes to move their newspaper
system (ATEX) off VAXCluster to RS/6000. I rename it HA/CMP when I
start doing technical/scientific cluster scale-up with national labs
and commercial cluster scale-up with RDBMS (Oracle, Sybase, Informix,
Ingres) vendors that had VAXCluster support in same source base with
UNIX. Early Jan1992, in meeting with Oracle CEO, AWD/Hester tells them
that we will have 16processor clusters mid92 and 128processor clusters
ye92. However, a couple weeks later, cluster scale-up is transferred
for announce as IBM Supercomputer (for technical/scientific *ONLY*)
and we are told we can't work on anything with more than four
processors (we leave IBM a few months later).
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc
https://www.garlic.com/~lynn/subtopic.html#801
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: RS/6000, PowerPC, AS/400 Date: 16 Aug, 2024 Blog: Facebook
re:
ROMP for displaywriter follow-on pivots to unix workstation market for
pc/rt ... needing floating point
https://en.wikipedia.org/wiki/IBM_RT_PC
The Standard Processor Card or 032 card had a 5.88 MHz clock rate (170
ns cycle time), 1 MB of standard memory (expandable via 1, 2, or 4 MB
memory boards). It could be accompanied by an optional Floating-Point
Accelerator (FPA) board, which contained a 10 MHz National
Semiconductor NS32081 floating point coprocessor. This processor card
was used in the original RT PC models (010, 020, 025, and A25)
announced on January 21, 1986.[3][4]
... snip ...
RIOS chipset (RS/6000) designed for unix workstation
https://en.wikipedia.org/wiki/IBM_POWER_architecture
trivia: I also had the HSDT project from the early 80s, T1 and faster
computer links (both satellite and terrestrial). One of the first T1
satellite links was between IBM Los Gatos Lab and Clementi's
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in IBM Kingston ... which had a boatload of Floating Point
Systems boxes
https://en.wikipedia.org/wiki/Floating_Point_Systems
trivia: in the 60s, IBM sold 2701 controllers that supported T1 ... then, apparently because of problems with SNA/VTAM introduced in the 70s, controllers were capped at 56kbit/sec links (I had various battles with the communication group over this).
Then got a custom-made Ku-band TDMA satellite system (with transponder on SBS4), 4.5M dishes in Los Gatos and Yorktown and a 7M dish in Austin. San Jose got an EVE (VLSI logic hardware verification) and ran a tail circuit between Los Gatos and the machine room EVE ... the claim is that Austin being able to use the San Jose EVE helped bring the RIOS chipset in a year early
trivia: RS6000/POWER didn't have cache consistency for shared-memory,
tightly-coupled multiprocessor operation, so to scale up it had to do
cluster operation ... POWER/PC got cache consistency by adopting it
from the Motorola (RISC) 88k.
1993: eight processor ES/9000-982 : 408MIPS, 51MIPS/processor
1993: RS6000/990 : 126MIPS; 16-system: 2016MIPs, 128-system: 16,128MIPS
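The arithmetic behind those 1993 numbers is simple linear scaling; a quick sketch (all figures come from the two lines above; linear cluster scaling is an idealization that ignores interconnect and cluster overhead):

```python
# 1993 figures quoted above: eight-processor ES/9000-982 SMP vs
# clusters of single-system RS6000/990s, assuming idealized linear scaling.
ES9000_982_TOTAL = 408            # MIPS, eight-processor SMP aggregate
ES9000_982_PER_CPU = ES9000_982_TOTAL / 8

RS6000_990 = 126                  # MIPS, one system

def cluster_mips(per_system, n_systems):
    """Idealized aggregate MIPS for an n-system cluster (linear scaling)."""
    return per_system * n_systems

print(cluster_mips(RS6000_990, 16))    # 2016
print(cluster_mips(RS6000_990, 128))   # 16128
print(ES9000_982_PER_CPU)              # 51.0
```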
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc
https://www.garlic.com/~lynn/subtopic.html#801
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
(my) desk ornament: (original) 6-chip RIOS/POWER, 150 MIPS, 60 MFLOPS, 7 million transistors
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: RS/6000, PowerPC, AS/400 Date: 16 Aug, 2024 Blog: Facebook
re:
trivia: Austin AWD did some of their own cards for the PC/RT (which had a 16bit PC/AT bus), including 4mbit token-ring. However, for the RS/6000 with microchannel, AWD was told they couldn't do their own (microchannel) cards but had to use the (heavily performance-kneecapped) PS2 cards (the PS2 microchannel 16mbit token-ring card had lower card throughput than the PC/RT 4mbit token-ring card; the joke was that an RS/6000 16mbit T/R server would have lower throughput than a PC/RT 4mbit token-ring server). As a partial workaround to the corporate politics, AWD came out with the 730 graphics workstation (with a VMEbus so it could get a high-performance VMEbus graphics card, side-stepping corporate restrictions).
when I joined IBM, one of my hobbies was enhanced production operating
systems for internal datacenters, including IBM world-wide, online
sales&marketing support HONE, which was a long-time customer. In the first
part of the 70s, IBM had the "Future System" effort that was
completely different from 370s and was going to completely replace it
(internal politics was killing off 370s efforts, the lack of new 370
products during the period is credited with giving the clone 370
makers, including Amdahl, their market foothold). I continued to work
on 360&370 all during the FS period including periodically ridiculing
what they were doing (which wasn't exactly career enhancing
activity). After FS finally implodes there was a mad rush to get stuff
back into the 370 product pipelines, including kicking off quick&dirty
3033&3081 efforts in parallel.
http://www.jfsowa.com/computer/memo125.htm
Note the communication group was fiercely fighting off client/server and distributed computing trying to preserve their dumb terminal paradigm (motivation to performance kneecap all the PS2 microchannel cards). New Almaden bldg had been heavily provisioned with CAT wiring, presumably for 16mbit token-ring .... but found $69 10mbit ethernet (CAT wiring) cards had higher card throughput than $800 16mbit token-ring cards ... and 10mbit ethernet had higher aggregate LAN throughput and lower latency than 16mbit token-ring.
1988, the IBM branch office asked if I could help LLNL (national lab) with standardizing some serial stuff they were playing with, which quickly becomes the fibre-channel standard ("FCS", including some stuff I had done for STL in 1980; initially 1gbit, full-duplex, 200mbytes/sec aggregate). Then POK finally releases some stuff (that they had been playing with for at least a decade) with ES/9000 as ESCON (when it was already obsolete, 17mbytes/sec).
trivia: a senior disk engineer in the late 80s got a talk scheduled at the annual, internal, world-wide communication group conference, supposedly on 3174 performance, but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. They were seeing a drop in sales with data fleeing datacenters for more distributed computing friendly platforms and had come up with a number of solutions. However, they were all being vetoed by the communication group with their corporate strategic "ownership" of everything that crossed datacenter walls. The disk division software executive's partial workaround was to invest in distributed computing startups that would use IBM disks; he would periodically ask us to drop in on his investments to see if we could provide any help.
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc
https://www.garlic.com/~lynn/subtopic.html#801
communication group protecting their dumb terminal paradigm
https://www.garlic.com/~lynn/subnetwork.html#terminal
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
FCS and/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Amdahl Date: 17 Aug, 2024 Blog: Facebook
Amdahl had won the battle to make ACS 360-compatible. Folklore is that executives were then afraid it would advance the state of the art too fast and IBM would lose control of the market, so it is killed; Amdahl leaves shortly after (lists some features that show up more than two decades later with ES/9000)
Not long after Amdahl leaves, IBM has Future System effort, completely
different from 370 and was going to completely replace 370; internal
politics during FS was killing off 370 projects, the dearth of new
370s during FS, is credited with giving the clone 370s makers
(including Amdahl) their market foothold.
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html
one of the last nails in the FS coffin was the IBM Houston Science Center analysis that if 370/195 applications were redone for an FS machine made out of the fastest hardware technology available, they would have the throughput of a 370/145 (about a 30 times slowdown). When "FS" implodes there is a mad rush to get stuff back into the 370 product pipelines, including quick&dirty 3033&3081 in parallel
as per memo125, the 3081 had a huge increase in the number of circuits, which could be considered the motivation for TCMs ... cramming all those circuits into a smaller volume, requiring liquid cooling. The 3081 was going to be multiprocessor-only, and the aggregate of the two-processor 3081D supposedly benchmarked slower than the 3033MP ... IBM fairly quickly doubled the processor cache sizes for the 3081K, increasing the two-processor aggregate MIPS rate to approx. the same as the Amdahl single processor. Also, IBM documents listed MVS multiprocessor throughput as (only) 1.2-1.5 times a single processor (because of its multiprocessor overhead) ... so an MVS 3081K would have much lower throughput than an Amdahl single processor (even though approx. the same MIPS).
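A sketch of that comparison, with hypothetical MIPS numbers (the post gives only the 1.2-1.5 factor; `amdahl_single` is an illustrative value, not a real benchmark):

```python
# Why a two-processor MVS system could trail a uniprocessor of the same
# aggregate hardware MIPS: IBM documented MVS two-processor throughput
# at only 1.2-1.5 times a single processor (multiprocessor overhead).
def mvs_effective(single_cpu_mips, smp_factor):
    """Effective MVS throughput of a two-processor system."""
    return single_cpu_mips * smp_factor

amdahl_single = 10.0              # hypothetical uniprocessor MIPS
ibm_per_cpu = amdahl_single / 2   # two CPUs, same aggregate hardware MIPS

low = mvs_effective(ibm_per_cpu, 1.2)    # 6.0 effective MIPS
high = mvs_effective(ibm_per_cpu, 1.5)   # 7.5 effective MIPS
# Either way, well below the uniprocessor's 10.0 at the same hardware MIPS.
```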
Note during FS, I continued to work on 360/370 and would periodically
ridicule the FS activity (not exactly career enhancing). Also after
joining IBM, one of my hobbies was enhanced production operating
systems for internal datacenters; the world-wide, online
sales&marketing support HONE systems were a long-time customer. Amdahl had
been selling into scientific/technical/univ. market but had yet to
break into the true-blue commercial market. I was also participating
in user group meetings and dropping by customers. The director of one
of the largest (financial industry) IBM datacenters on the east coast
liked me to stop by and talk technology. At some point the IBM branch
manager horribly offended the customer and in retribution, the
customer ordered an Amdahl system (lone Amdahl in large sea of IBM,
but would be 1st in true-blue commercial). I was asked to go onsite
for 6-12 months (to help obfuscate why the customer was ordering
Amdahl). I talk it over with the customer and then decline IBM's
offer. I was then told that the branch manager was good sailing buddy
of IBM CEO and if I didn't do this, I could forget career, promotions,
raises. Some more detail
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
Note in the morph of CP67/CMS to VM370/CMS, they simplify and/or drop features (including multiprocessor support). I initially convert a lot of CP67 stuff to VM370R2 for my first VM-based internal CSC/VM. Then I add multiprocessor support to a VM370R3-based CSC/VM, originally for the consolidated US HONE operation (large single-system-image, shared DASD, with load-balance and fall-over) complex up in Palo Alto (trivia: when FACEBOOK 1st moves into silicon valley, it was into a new bldg built next door to the former consolidated US HONE datacenter) ... so they could add a 2nd processor to each system. Then with the implosion of FS, I was sucked into helping with a 16-processor multiprocessor system and we con the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips). Everybody thought it was great until somebody tells the head of POK that it could be decades before POK's favorite son operating system (MVS) had (effective) 16-processor support (POK doesn't ship a 16-processor system until after the turn of the century) ... aka MVS two-processor support only getting 1.2-1.5 times the throughput of a single processor (and the overhead increased with the number of processors). The head of POK directs some of us to never visit POK again, and the 3033 processor engineers are told heads down and no distractions.
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
IBM Science Center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
SMP, tightly-coupled, multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: The IBM Way by Buck Rogers Date: 17 Aug, 2024 Blog: Facebook
The IBM Way by Buck Rogers
Learson tried (& failed) to block the bureaucrats, careerists, and
MBAs from destroying Watson culture and legacy.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
and 20yrs later IBM has one of the largest losses in the history of US
companies and was being reorged into the 13 "baby blues" in
preparation for breaking up the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup of the company. Before we get started, the board brings in the former president of AMEX, who (somewhat) reverses the breakup (although it wasn't long before the disk division is gone).
periodic repost: a senior disk engineer in the late 80s got a talk scheduled at the annual, internal, world-wide communication group conference, supposedly on 3174 performance, but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. They were seeing a drop in sales with data fleeing datacenters for more distributed computing friendly platforms and had come up with a number of solutions. However, they were all being vetoed by the communication group with their corporate strategic "ownership" of everything that crossed datacenter walls (fiercely fighting off client/server and distributed computing, trying to preserve their dumb terminal paradigm). The disk division software executive's partial workaround was to invest in distributed computing startups that would use IBM disks; he would periodically ask us to drop in on his investments to see if we could provide any help.
The communication group stranglehold on datacenters wasn't just disks, and a couple years later IBM has one of the largest losses in the history of US companies. IBM was then being reorged into the 13 "baby blues" in preparation for breaking up the company. We had left IBM but get a call from the bowels of Armonk asking if we could help with the breakup. Before we get started, the board hires the former AMEX president as CEO, who (somewhat) reverses the breakup (but it is not long before the disk division is gone).
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
AMEX, Private Equity, IBM related Gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
communication group trying to preserve dumb terminal paradigm posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: The IBM Way by Buck Rogers Date: 18 Aug, 2024 Blog: Facebook
re:
I took a two credit hr intro to fortran/computers class and at the end of the semester was hired to rewrite 1401 MPIO for the 360/30. The univ had a 709/1401 (1401 unit record front-end for the 709) and was getting a 360/67 for tss/360 to replace the 709/1401 ... a 360/30 temporarily replaced the 1401 until the 360/67 was available. Within a year of taking the intro class, the 360/67 arrived and I was hired fulltime responsible for os/360 (tss/360 never came to production). Then the Science Center came out to install CP67 (3rd installation after Cambridge itself and MIT Lincoln Labs) and I mostly got to play with it during my dedicated weekend window.
Before I graduate, I was hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidating all dataprocessing into an independent business unit). I think the Renton datacenter was the largest in the world, with a couple hundred million in IBM gear, 360/65s arriving faster than they could be installed, boxes constantly staged in hallways around the machine room (somebody commented that Boeing ordered 360/65s like other companies ordered keypunches). Lots of politics between the Renton director and the Boeing CFO, who only had a 360/30 up at Boeing field for payroll (although they enlarge the machine room to install a 360/67 for me to play with when I wasn't doing other stuff). When I graduate, I join the IBM science center (instead of staying with the Boeing CFO).
One of my hobbies after joining IBM was enhanced production operating systems for internal systems, and HONE was a long-time customer. The 23June1969 unbundling announcement started charging for (application) software (the case was made that kernel software was still free), SE services, maint, etc. SE training used to include being part of a large group at the customer site, but they couldn't figure out how not to charge for SE trainee time. HONE was then born: lots of CP67/CMS datacenters for branch office online access, SEs practicing with guest operating systems in CP67 virtual machines. The science center also ported APL\360 to CMS for CMS\APL, and HONE started to offer CMS\APL-based sales&marketing apps which came to dominate all HONE activity (practicing with guest operating systems withered away).
The decision was made to add virtual memory to all 370s ... I was asked to track down that decision ... turns out MVT storage management was so bad that regions had to be specified four times larger than used. As a result a typical 1mbyte 370/165 could only run four regions concurrently, insufficient to keep the system busy and justified. MVT going to 16mbyte virtual memory (VS2/SVS) could increase the number of concurrent regions by a factor of four (capped at 15, because of the 4bit storage protect keys) with little or no paging (sort of like running MVT in a CP67 16mbyte virtual machine). The morph of CP67 to VM370 simplified and/or dropped lots of features (including multiprocessor support).
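The region arithmetic above can be sketched as follows (the ratios come from the paragraph; exact region sizes are not given, so the counts are the quoted ones):

```python
# VS2/SVS arithmetic: a typical 1mbyte 370/165 under MVT ran only four
# regions; a 16mbyte virtual address space allows four times as many,
# but the 4-bit storage protect key (key 0 reserved for the system)
# caps concurrent regions at 15.
regions_real = 4                     # typical 1mbyte 370/165 under MVT
regions_virtual = regions_real * 4   # 16mbyte virtual memory, 4x regions
STORAGE_KEYS = 2**4 - 1              # 4-bit key -> 15 usable non-zero keys

concurrent = min(regions_virtual, STORAGE_KEYS)
print(concurrent)                    # 15
```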
Also early 70s, the Future System effort, totally different from 370 and intended to completely replace 370 (internal politics was killing off 370 efforts; the lack of new 370s is credited with giving the clone 370 makers, including Amdahl, their market foothold). I continued to work on 360&370 all during FS, including ridiculing FS (which wasn't exactly career enhancing). I start migrating lots of CP67 features to a VM370R2 base for my internal CSC/VM ... and US HONE datacenters are consolidated in Palo Alto (and clone HONE systems start cropping up all over the world, some of my 1st overseas business trips) ... with single-system-image, loosely-coupled, shared DASD, load-balancing and fall-over across the complex. I then migrate CP67 SMP, tightly-coupled, multiprocessor support to a VM370R3-based CSC/VM, initially for US HONE so they can add a 2nd processor to each system.
Future System finally implodes and there is mad rush to get stuff back
into 370 product pipelines, including kicking off quick&dirty 3033 and
3081 efforts in parallel. FS (failing) significantly accelerated the
rise of the bureaucrats, careerists, and MBAs .... From Ferguson &
Morris, "Computer Wars: The Post-IBM World", Time Books
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394
... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrong headedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive
... snip ...
more detail
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
With the rise of the clone 370 makers, there is also the decision to
start charging for kernel software, and some of my CSC/VM is chosen
for initial guinea pig and I get to spend a lot of time with business
planners and lawyers on kernel software charging. The head of POK was
also in the processing of convincing corporate to kill the VM370
product, shutdown the development group and move all the people to POK
for MVS/XA. Endicott eventually manages to save the VM370 product
mission (for the mid-range) but has to recreate a development group
from scratch. When the transition to charging for kernel software is
complete in the 80s, IBM starts the "OCO-wars" with customers, no
longer shipping source. Note TYMSHARE had started offering their
CMS-based online computer conferencing system free to (mainframe user
group) SHARE
https://en.wikipedia.org/wiki/SHARE_(computing)
in Aug1976 as VMSHARE, archives.
http://vm.marist.edu/~vmshare
I had cut a deal with TYMSHARE to get monthly tape dump of all VMSHARE (and later PCSHARE) files, for putting up on internal network and systems (including HONE). One of the problems was with lawyers that were concerned that internal employees would be contaminated by unfiltered consumer information (especially if it differed from IBM corporate party line).
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
23june1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: The IBM Way by Buck Rogers Date: 18 Aug, 2024 Blog: Facebook
re:
S/38 was a simplified FS with the single-level-store (which had somewhat been inherited from TSS/360). It started out with a single disk; with more than one disk, it was still a single filesystem, which could result in a file scatter-allocated across multiple disks (as a result all disks had to be backed up as a single entity, and a single disk failure required replacing the disk and restoring the whole filesystem). Also there was plenty of performance headroom between the throughput required for S/38 and the available hardware (not like FS, where 370/195 apps redone for an FS machine made out of the fastest available technology ... would have the throughput of a 370/145 ... about a 30 times slowdown).
trivia: my brother was the regional Apple marketing rep and I could get invited to business dinners when he came into town, and got to argue MAC design with MAC developers (even before MAC was announced). He also figured out how to remotely dial into the S/38 that ran Apple, to track manufacturing and delivery schedules.
Because of how traumatic a single disk failure was for the S/38
multi-disk filesystem, S/38 was an early adopter of RAID.
https://en.wikipedia.org/wiki/RAID#History
In 1977, Norman Ken Ouchi at IBM filed a patent disclosing what was
subsequently named RAID 4.[5]
... snip ...
trivia: when I transfer out to SJR in the 2nd half of the 70s, I got to wander around silicon valley datacenters ... including bldgs 14&15 (disk engineering and product test) across the street. They were running 7x24, pre-scheduled, stand-alone testing and mentioned that they had recently tried MVS, but it had a 15min mean-time-between-failure (in that environment) requiring manual MVS re-ipl. I offer to rewrite the I/O supervisor to make it bullet-proof and never fail so they could do any amount of concurrent, on-demand testing, greatly improving productivity. I then write an internal-only research report on the I/O integrity work and happen to mention the MVS 15min MTBF ... bringing the wrath of the MVS organization down on my head.
other trivia: part of periodically ridiculing FS was how they were doing single-level store. I had implemented a CP67/CMS page-mapped filesystem (and converted it to VM370/CMS) and claimed I had learned what not to do from TSS/360. The implosion of FS gave page-mapped oriented filesystems such a bad name that I could never get mine released to customers (although it shipped in my internal systems ... including production systems in Rochester). It wasn't just me: Simpson (of HASP spooling fame) had done MFT virtual memory support and a page-mapped filesystem that he called RASP ... and made no headway ... leaving IBM for Amdahl ... where he rewrote it from scratch in "clean room" conditions. IBM tried to sue, claiming it contained IBM code; however, the auditors were only able to find a couple of very short code sequences that resembled any IBM code.
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
page mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap
getting to play disk engineer in bldg14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk
internal enhanced system posts
https://www.garlic.com/~lynn/submisc.html#cscvm
Recent FS posts specifically mentioning 30 times slowdown
https://www.garlic.com/~lynn/2024e.html#65 Amdahl
https://www.garlic.com/~lynn/2024e.html#37 Gene Amdahl
https://www.garlic.com/~lynn/2024e.html#18 IBM Downfall and Make-over
https://www.garlic.com/~lynn/2024d.html#29 Future System and S/38
https://www.garlic.com/~lynn/2024d.html#1 time-sharing history, Privilege Levels Below User
https://www.garlic.com/~lynn/2024b.html#20 DB2, Spooling, Virtual Memory
https://www.garlic.com/~lynn/2024.html#103 Multicians
https://www.garlic.com/~lynn/2024.html#94 MVS SRM
https://www.garlic.com/~lynn/2024.html#86 RS/6000 Mainframe
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Public Facebook Mainframe Group Date: 18 Aug, 2024 Blog: Facebook
re:
the 1401 had both a 2540 card reader/punch and a 1403N1 that were moved over to the 360/30 control unit ... the univ shut down the datacenter on weekends and I got the whole place ... they gave me a bunch of hardware and software manuals and I got to design my own system ... within a few weeks I had a 2000-card assembler program.
One of the 1st things I learned when coming in Sat. morning was to clean the tape drives, 2540 (disassemble, clean, put back together), 1403, etc. Sometimes production had finished early and everything was powered off, and the 360/30 wouldn't complete power-on. I learned to put all the controllers into CE-mode, power on the 360/30, then power on the controllers individually and take them out of CE-mode.
other comments/replies in recent "Buck Roger" post
https://www.garlic.com/~lynn/2024e.html#66 The IBM Way by Buck Rogers
https://www.garlic.com/~lynn/2024e.html#67 The IBM Way by Buck Rogers
https://www.garlic.com/~lynn/2024e.html#68 The IBM Way by Buck Rogers
trivia: I was introduced to John Boyd in the early 80s and would sponsor
his briefings at IBM. He had redone the original F15 design, cutting
weight nearly in half, and was then responsible for YF16&YF17 (which
become F16&F18).
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
One of my siblings was a 30yr Navy civilian employee, mostly at Whidbey;
when they retired, they were running Growler supply&maint
https://en.wikipedia.org/wiki/Boeing_EA-18G_Growler
I would periodically visit ... would also repeat a lot of Boyd stories, and got a lot of EA-18 tshirts and misc other carrier souvenirs. The F35 forces were publicly saying the F35 obsoleted the F15, F16, F18 (and the Growlers, because F35 stealth was so good that radar jamming was no longer needed) and A10. Boyd had also helped Pierre Sprey with the A10, and when the F35 forces started attacking Sprey ... I researched non-classified information for a couple weeks and did a public piece on how to defeat F35 stealth. Shortly after, they publicly announce new jammer pods for the EA-18.
Boyd posts and web URLs:
https://www.garlic.com/~lynn/subboyd.html
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: The Rise And Fall Of Unix Newsgroups: alt.folklore.computers Date: Tue, 20 Aug 2024 07:21:17 -1000
Marco Moock <mm+usenet-es@dorfdsl.de> writes:
besides adapting software to large scale cluster model ... around the turn of the century, the cloud operators were saying they were assembling their own commodity computers at 1/3rd the price of vendor systems (adopting custom software and system designs ... but then heavily influencing component design and starting to participate in chip design).
around the end of the 1st decade of the century, chip vendor press was saying they were shipping at least half their product directly to the (cluster) megadatacenters. Large cloud operators have scores of megadatacenters around the world, each megadatacenter with hundreds of thousands of blade systems and tens of millions of processors; with enormous automation, a megadatacenter is staffed with only 70-80 people.
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: The IBM Way by Buck Rogers Date: 20 Aug, 2024 Blog: Facebook
re:
During the 80s, the communication group was fiercely fighting off client/server and distributed computing (trying to preserve their dumb terminal paradigm). They were opposing the release of mainframe TCP/IP, and when that got reversed, they changed their strategy: since the communication group had corporate strategic responsibility for everything that crossed datacenter walls, it had to be released through them; what shipped got 44kbytes/sec aggregate using nearly a whole 3090 processor. I then add support for RFC1044, and in some tuning tests at Cray Research between a Cray and an IBM 4341, get sustained 4341 channel throughput using only a modest amount of the 4341 processor (something like a 500 times improvement in bytes moved per instruction executed).
In the early 80s, I got the HSDT project, T1 and faster computer links, both terrestrial and satellite ... and had lots of battles with the communication group. Note IBM sold 2701 controllers in the 60s supporting T1; however, the communication group in the 70s, with SNA/VTAM issues, capped links at 56kbits/sec.
Late 80s, a senior disk engineer got a talk scheduled at the annual, internal, world-wide communication group conference, supposedly on 3174 performance, but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. They were seeing a drop in sales with data fleeing datacenters for more distributed computing friendly platforms and had come up with a number of solutions. However, they were all being vetoed by the communication group with their corporate strategic "ownership" of everything that crossed datacenter walls. The disk division software executive's partial workaround was to invest in distributed computing startups that would use IBM disks; he would periodically ask us to drop in on his investments to see if we could provide any help.
The communication group stranglehold on datacenters wasn't just disks, and
a couple years later, IBM has one of the largest losses in the history
of US companies and was being reorged into the 13 "baby blues" in
preparation for breaking up the company (20yrs after Learson fails to
preserve Watson culture/legacy)
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup of the company. Before we get
started, the board brings in the former president of AMEX, who
(somewhat) reverses the breakup (although it wasn't long before the
disk division is gone).
The communication group had also severely performance-kneecapped the microchannel PS2 cards. The AWD group had done their own cards for the PC/RT (which had a 16bit PC/AT bus). Then for the RS/6000 with microchannel, AWD was told they couldn't do their own cards but had to use standard PS2 cards. Example: the $800 16mbit token-ring card had lower card throughput than the PC/RT 4mbit token-ring card (competing with $69 Ethernet cards supporting 8.5mbits/sec sustained). As a partial countermeasure to company politics, AWD came out with the RS/6000 M730 with a VMEbus ... so they could use workstation-industry high-performance VMEbus display cards.
In the 90s, the communication group hired a silicon valley contractor to implement TCP/IP directly in VTAM. What he initially demo'ed had TCP running much faster than LU6.2. He was then told that everybody "knows" LU6.2 is much faster than a "proper" TCP/IP implementation, and they would only pay for a "proper" implementation.
trivia: a late-80s univ study of the mainframe VTAM implementation found LU6.2 had a 160k-instruction pathlength (and 15 buffer copies), compared to unix/bsd (4.3 tahoe/reno) TCP with a 5k-instruction pathlength (and 5 buffer copies).
The communication group also tried to block me on the XTP technical advisory board, where we did a card supporting scatter/gather (aka mainframe chained data) I/O with direct data/packet transfer to/from user space (no buffer copy). One slight modification to TCP: moving the CRC from the header to a trailer, so the card could perform the CRC calculation as the data streamed through the card.
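The trailer-CRC idea can be sketched in a few lines: with the CRC in a header, the sender has to traverse all the data before the header field can be filled in; with it in a trailer, the CRC accumulates as the data streams through and is simply appended at the end. A minimal Python sketch, with zlib.crc32 standing in for whatever polynomial the actual card hardware used (the framing here is purely illustrative, not XTP's real format):

```python
import zlib

def send_with_trailer_crc(chunks):
    """Stream data chunks, updating the CRC incrementally as each chunk
    'flows through', then append the CRC as a 4-byte trailer."""
    crc = 0
    out = []
    for chunk in chunks:
        crc = zlib.crc32(chunk, crc)   # update CRC as the chunk streams by
        out.append(chunk)
    out.append(crc.to_bytes(4, "big")) # trailer: CRC follows the data
    return b"".join(out)

def check_trailer_crc(frame):
    """Receiver side: recompute over the data, compare with the trailer."""
    data, trailer = frame[:-4], frame[-4:]
    return zlib.crc32(data) == int.from_bytes(trailer, "big")

frame = send_with_trailer_crc([b"hello ", b"world"])
assert check_trailer_crc(frame)
```

The point is that `send_with_trailer_crc` never needs the whole frame in a buffer before transmission can begin, which is what made the no-buffer-copy card design workable.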
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
communication group trying to preserve dumb terminal paradigm posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
801/RISC, Iliad, ROMP, RIOS, PC/RT, RS/6000, Power, Power/PC posts
https://www.garlic.com/~lynn/subtopic.html#801
XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: The Rise And Fall Of Unix Newsgroups: alt.folklore.computers Date: Tue, 20 Aug 2024 12:04:36 -1000
scott@slp53.sl.home (Scott Lurndal) writes:
.... free, unencumbered, availability of source ... problems could
be considered behind the UNIX Wars, SUN/ATT against the rest of
the unixes and OSF:
https://en.wikipedia.org/wiki/Open_Software_Foundation
The Open Software Foundation (OSF) was a not-for-profit industry
consortium for creating an open standard for an implementation of the
operating system Unix. It was formed in 1988[1] and merged with X/Open
in 1996, to become The Open Group.[2]
...snip ...
https://en.wikipedia.org/wiki/Open_Software_Foundation#History
The organization was first proposed by Armando Stettner of Digital
Equipment Corporation (DEC) at an invitation-only meeting hosted by DEC
for several Unix system vendors in January 1988 (called the "Hamilton
Group", since the meeting was held at DEC's offices on Palo Alto's
Hamilton Avenue).[3] It was intended as an organization for joint
development, mostly in response to a perceived threat of "merged UNIX
system" efforts by AT&T Corporation and Sun Microsystems.
... snip ...
In the meantime, much of the cluster computing forces had latched on to Linux. One of the differences was that the vendors viewed software&computers as profit ... while the cluster computing forces viewed software&computers as cost.
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: The IBM Way by Buck Rogers Date: 20 Aug, 2024 Blog: Facebook
re:
When IBM's new CEO was still President of AMEX .... he had enormous mainframe datacenters reporting to him ... doing AMEX electronic card transactions, but also a credit-card outsourcing business that handled a large percentage of both acquiring merchants and issuing card holders in the US. In 1992 (same time IBM was having one of the largest losses in the history of US companies), AMEX spins off those datacenters and outsourcing business in the largest IPO up until that time as "First Data Corporation" (in part because banks felt that they could be at a competitive disadvantage with AMEX doing both their own cards and a large percentage of all bank cards).
trivia: AMEX was in competition with KKR for the LBO take-over of RJR and KKR wins; when KKR runs into trouble with RJR, it hires away the AMEX president to help (who is subsequently hired as IBM's CEO). 15yrs after the AMEX FDC spin-off (in the largest IPO up until that time), KKR does a LBO of FDC (in the largest LBO up until that time), since offloaded to FISERV.
Gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
Private Equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM San Jose Date: 20 Aug, 2024 Blog: Facebook
70s and 80s, some of us would gather on Fridays at local San Jose watering holes (for the 1st few years after Eric's opened up across Cottle from the plant site, they had my name posted on the door to the back room), discussing things like how to get the majority of IBM employees, who were computer illiterate, using computers. Also there was the difficulty of getting 3270 terminal justifications, requiring VP sign-off, into the fall business plan (even after having done a business case showing 3270 terminal 3yr capital depreciation was approximately the same as a monthly desk business phone).
Then there was a rapidly spreading rumor that corporate executives had gotten keyboards for email. All of a sudden, some of the annual 3270 terminal allocation was diverted to the desks of executives and management (even though almost none of them were ever actually going to do their own email or use the terminals).
Saw a number of rounds of this in San Jose ... when new PCs with terminal emulation came out, the managers and executives needed the latest status symbol, whether they used them or not. There were several examples where project-justified PS2/486s with (larger) 8514 screens for "real" work & development were rerouted to some manager's desk to spend their life idle, with the PROFS menu burned into the screen.
Harken back to CEO Learson failing to block the bureaucrats,
careerists, and MBAs from destroying the Watson culture&legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
posts getting to play disk engineer in bldgs14&15 (disk engineering
and product test)
https://www.garlic.com/~lynn/subtopic.html#disk
posts mentioning Eric's on fridays
https://www.garlic.com/~lynn/2014b.html#89 Royal Pardon For Turing
https://www.garlic.com/~lynn/2010j.html#76 What is the protocal for GMT offset in SMTP (e-mail) header
https://www.garlic.com/~lynn/2009r.html#62 some '83 references to boyd
https://www.garlic.com/~lynn/2008n.html#51 Baudot code direct to computers?
https://www.garlic.com/~lynn/2006s.html#50 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2003o.html#7 An informed populace
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM San Jose Date: 21 Aug, 2024 Blog: Facebook
re:
I did CMSBACK in the late 1970s for internal datacenters ... it went
through a number of internal releases; then, a decade or so later, clients
were done for PCs/workstations and it was eventually released as WDSF
... then morphing into ADSM
https://en.wikipedia.org/wiki/IBM_Tivoli_Storage_Manager
I had started with a modified version of VMFPLC, increasing the maximum blocking size and merging the separate FST record with the 1st data block (cutting the tape inter-record-gap overhead for smaller files). I had earlier done a page-mapped filesystem for CP67/CMS and then ported it to VM370/CMS ... so I also made sure that buffers were page aligned.
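The inter-record-gap saving from merging the FST record into the first data block can be made concrete with rough numbers. The density and gap figures below are assumptions for illustration (typical 9-track values), not figures from the post:

```python
# Rough tape-usage arithmetic. Assumed numbers: 6250 bpi recording
# density, 0.6-inch inter-record gap (typical for 9-track tape).
BPI = 6250   # bytes per inch of tape
GAP = 0.6    # inches of gap after every physical record

def tape_inches(record_bytes):
    """Tape length one physical record consumes, including its gap."""
    return record_bytes / BPI + GAP

# A small file written as two records (separate FST record, then one
# data block) versus one merged record (FST folded into the data block):
fst, data = 64, 4096
two_records = tape_inches(fst) + tape_inches(data)
one_record = tape_inches(fst + data)
print(two_records, one_record)
```

For a small file the merge saves exactly one gap, which at these assumed numbers is comparable to the tape length of the data itself, so halving the record count roughly halves the tape consumed.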
Late 80s, a senior disk engineer got a talk scheduled at the world-wide, internal, annual communication group conference, supposedly on 3174 performance, but opened his talk with the statement that the communication group was going to be responsible for the demise of the disk division. The disk division was seeing a drop in disk sales with data fleeing datacenters to more distributed computing friendly platforms. The disk division had done a number of things to address the situation, but they were constantly being vetoed by the communication group (which had corporate strategic responsibility for everything that crossed the datacenter walls and was fiercely fighting off client/server and distributed computing). As a partial countermeasure, the GPD/AdStar software executive was investing in distributed computing startups that would use IBM disks (and he would periodically ask us to stop by his investments to see if we could help).
At the time, we were doing HA/CMP, which started out as HA/6000,
originally for the NYTimes to move their newspaper system (ATEX) from
VAXCluster to RS/6000. I rename it HA/CMP when I start doing cluster
scale-up with national labs and commercial cluster scale-up with RDBMS
vendors (that had VAXCluster support in the same source base with
Unix; Oracle, Sybase, Informix, Ingres). Also working with LLNL on
porting their Cray filesystem LINCS to HA/CMP. One of the GPD/AdStar
software executive investments was NCAR's Mesa Archival (their
archive&filesystem) and we were also helping them port to HA/CMP.
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
backup/archive posts
https://www.garlic.com/~lynn/submain.html#backup
cms page-mapped filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap
communication group fighting off client/server and distributed
computing posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
some posts mentioning cmsback, wdsf, adsm
https://www.garlic.com/~lynn/2024b.html#7 IBM Tapes
https://www.garlic.com/~lynn/2023g.html#32 Storage Management
https://www.garlic.com/~lynn/2023e.html#98 Mainframe Tapes
https://www.garlic.com/~lynn/2023e.html#81 Storage Management
https://www.garlic.com/~lynn/2023c.html#55 IBM VM/370
https://www.garlic.com/~lynn/2022c.html#85 VMworkshop.og 2022
https://www.garlic.com/~lynn/2021j.html#36 Programming Languages in IBM
https://www.garlic.com/~lynn/2021j.html#22 IBM 3420 Tape
https://www.garlic.com/~lynn/2021c.html#63 Distributed Computing
https://www.garlic.com/~lynn/2021.html#26 CMSBACK, ADSM, TSM
https://www.garlic.com/~lynn/2018d.html#39 IBM downturn
https://www.garlic.com/~lynn/2018d.html#9 Hell is ... ?
https://www.garlic.com/~lynn/2018.html#35 AW: Re: Number of Cylinders per Volume
https://www.garlic.com/~lynn/2017k.html#34 Bad History
https://www.garlic.com/~lynn/2017g.html#37 CMSBACK
https://www.garlic.com/~lynn/2016e.html#88 E.R. Burroughs
https://www.garlic.com/~lynn/2016.html#2 History question - In what year did IBM first release its DF/DSS backup & restore product?
https://www.garlic.com/~lynn/2014i.html#79 IBM Programmer Aptitude Test
https://www.garlic.com/~lynn/2014i.html#58 How Comp-Sci went from passing fad to must have major
https://www.garlic.com/~lynn/2014b.html#92 write rings
https://www.garlic.com/~lynn/2013d.html#11 relative mainframe speeds, was What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012p.html#61 What is holding back cloud adoption?
https://www.garlic.com/~lynn/2012k.html#46 Slackware
https://www.garlic.com/~lynn/2012b.html#73 Tape vs DASD - Speed/time/CPU utilization
https://www.garlic.com/~lynn/2011j.html#6 At least two decades back, some gurus predicted
https://www.garlic.com/~lynn/2011h.html#2 WHAT WAS THE PROJECT YOU WERE INVOLVED/PARTICIPATED AT IBM THAT YOU WILL ALWAYS REMEMBER?
https://www.garlic.com/~lynn/2010l.html#43 PROP instead of POPS, PoO, et al
https://www.garlic.com/~lynn/2010l.html#18 Old EMAIL Index
https://www.garlic.com/~lynn/2010d.html#67 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2009f.html#59 Backup and Restore Manager for z/VM
https://www.garlic.com/~lynn/2006t.html#20 Why these original FORTRAN quirks?; Now : Programming practices
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: The Death of the Engineer CEO Date: 22 Aug, 2024 Blog: Facebook
The Death of the Engineer CEO: Evidence that Short-Termism and Financialization Had Become Ascendant
Boeing and the Dark Age of American Manufacturing. Somewhere along the
line, the plane maker lost interest in making its own planes. Can it
rediscover its engineering soul?
https://www.theatlantic.com/ideas/archive/2024/04/boeing-corporate-america-manufacturing/678137/
Did Stock Buybacks Knock the Bolts Out of Boeing?
https://lesleopold.substack.com/p/did-stock-buybacks-knock-the-bolts
Since 2013, the Boeing Corporation initiated seven annual stock
buybacks. Much of Boeing's stock is owned by large investment firms
which demand the company buy back its shares. When Boeing makes
repurchases, the price of its stock is jacked up, which is a quick and
easy way to move money into the investment firms' purse. Boeing's
management also enjoys the boost in price, since nearly all of their
executive compensation comes from stock incentives. When the stock
goes up via repurchases, they get richer, even though Boeing isn't
making any more money.
... snip ...
2016, one of the "The Boeing Century" articles was about how the
merger with MD had nearly taken down Boeing and may yet still
(infusion of military industrial complex culture into commercial
operation)
https://issuu.com/pnwmarketplace/docs/i20160708144953115
The Coming Boeing Bailout?
https://mattstoller.substack.com/p/the-coming-boeing-bailout
Unlike Boeing, McDonnell Douglas was run by financiers rather than
engineers. And though Boeing was the buyer, McDonnell Douglas
executives somehow took power in what analysts started calling a
"reverse takeover." The joke in Seattle was, "McDonnell Douglas bought
Boeing with Boeing's money."
... snip ...
Crash Course
https://newrepublic.com/article/154944/boeing-737-max-investigation-indonesia-lion-air-ethiopian-airlines-managerial-revolution
Sorscher had spent the early aughts campaigning to preserve the
company's estimable engineering legacy. He had mountains of evidence
to support his position, mostly acquired via Boeing's 1997 acquisition
of McDonnell Douglas, a dysfunctional firm with a dilapidated aircraft
plant in Long Beach and a CEO who liked to use what he called the
"Hollywood model" for dealing with engineers: Hire them for a few
months when project deadlines are nigh, fire them when you need to
make numbers. In 2000, Boeing's engineers staged a 40-day strike over
the McDonnell deal's fallout; while they won major material
concessions from management, they lost the culture war. They also
inherited a notoriously dysfunctional product line from the
corner-cutting market gurus at McDonnell.
... snip ...
Boeing's travails show what's wrong with modern
capitalism. Deregulation means a company once run by engineers is now
in the thrall of financiers and its stock remains high even as its
planes fall from the sky
https://www.theguardian.com/commentisfree/2019/sep/11/boeing-capitalism-deregulation
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback
some past posts mentioning "McDonnell Douglas bought Boeing"
https://www.garlic.com/~lynn/2024d.html#79 Other Silicon Valley
https://www.garlic.com/~lynn/2024c.html#9 Boeing and the Dark Age of American Manufacturing
https://www.garlic.com/~lynn/2024.html#56 Did Stock Buybacks Knock the Bolts Out of Boeing?
https://www.garlic.com/~lynn/2023g.html#104 More IBM Downfall
https://www.garlic.com/~lynn/2022h.html#18 Sun Tzu, Aristotle, and John Boyd
https://www.garlic.com/~lynn/2022g.html#64 Massachusetts, Boeing
https://www.garlic.com/~lynn/2022d.html#91 Short-term profits and long-term consequences -- did Jack Welch break capitalism?
https://www.garlic.com/~lynn/2022b.html#117 Downfall: The Case Against Boeing
https://www.garlic.com/~lynn/2022.html#109 Not counting dividends IBM delivered an annualized yearly loss of 2.27%
https://www.garlic.com/~lynn/2021k.html#69 'Flying Blind' Review: Downward Trajectory
https://www.garlic.com/~lynn/2021k.html#40 Boeing Built an Unsafe Plane, and Blamed the Pilots When It Crashed
https://www.garlic.com/~lynn/2021f.html#78 The Long-Forgotten Flight That Sent Boeing Off Course
https://www.garlic.com/~lynn/2021f.html#57 "Hollywood model" for dealing with engineers
https://www.garlic.com/~lynn/2021e.html#87 Congress demands records from Boeing to investigate lapses in production quality
https://www.garlic.com/~lynn/2021b.html#70 Boeing CEO Said Board Moved Quickly on MAX Safety; New Details Suggest Otherwise
https://www.garlic.com/~lynn/2021b.html#40 IBM & Boeing run by Financiers
https://www.garlic.com/~lynn/2020.html#10 "This Plane Was Designed By Clowns, Who Are Supervised By Monkeys"
https://www.garlic.com/~lynn/2019e.html#153 At Boeing, C.E.O.'s Stumbles Deepen a Crisis
https://www.garlic.com/~lynn/2019e.html#151 OT: Boeing to temporarily halt manufacturing of 737 MAX
https://www.garlic.com/~lynn/2019e.html#39 Crash Course
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: The Death of the Engineer CEO Date: 22 Aug, 2024 Blog: Facebook
re:
... IBM as a financial engineering company; Stockman, The Great
Deformation: The Corruption of Capitalism in America
https://www.amazon.com/Great-Deformation-Corruption-Capitalism-America-ebook/dp/B00B3M3UK6/
pg464/loc9995-10000:
IBM was not the born-again growth machine trumpeted by the mob of Wall
Street momo traders. It was actually a stock buyback contraption on
steroids. During the five years ending in fiscal 2011, the company
spent a staggering $67 billion repurchasing its own shares, a figure
that was equal to 100 percent of its net income.
pg465/loc10014-17:
Total shareholder distributions, including dividends, amounted to $82
billion, or 122 percent, of net income over this five-year
period. Likewise, during the last five years IBM spent less on capital
investment than its depreciation and amortization charges, and also
shrank its constant dollar spending for research and development by
nearly 2 percent annually.
... snip ...
(2013) New IBM Buyback Plan Is For Over 10 Percent Of Its Stock
http://247wallst.com/technology-3/2013/10/29/new-ibm-buyback-plan-is-for-over-10-percent-of-its-stock/
(2014) IBM Asian Revenues Crash, Adjusted Earnings Beat On Tax Rate
Fudge; Debt Rises 20% To Fund Stock Buybacks (gone behind paywall)
https://web.archive.org/web/20140201174151/http://www.zerohedge.com/news/2014-01-21/ibm-asian-revenues-crash-adjusted-earnings-beat-tax-rate-fudge-debt-rises-20-fund-st
The company has represented that its dividends and share repurchases
have come to a total of over $159 billion since 2000.
... snip ...
(2016) After Forking Out $110 Billion on Stock Buybacks, IBM Shifts
Its Spending Focus
https://www.fool.com/investing/general/2016/04/25/after-forking-out-110-billion-on-stock-buybacks-ib.aspx
(2018) ... still doing buybacks ... but will (now? finally? a
little?) shift focus, needing it for the Red Hat purchase.
https://www.bloomberg.com/news/articles/2018-10-30/ibm-to-buy-back-up-to-4-billion-of-its-own-shares
(2019) IBM Tumbles After Reporting Worst Revenue In 17 Years As Cloud
Hits Air Pocket (gone behind paywall)
https://web.archive.org/web/20190417002701/https://www.zerohedge.com/news/2019-04-16/ibm-tumbles-after-reporting-worst-revenue-17-years-cloud-hits-air-pocket
More financial engineering:
IBM deliberately misclassified mainframe sales to enrich execs,
lawsuit claims. Lawsuit accuses Big Blue of cheating investors by
shifting systems revenue to trendy cloud, mobile tech
https://www.theregister.com/2022/04/07/ibm_securities_lawsuit/
IBM has been sued by investors who claim the company under former CEO
Ginni Rometty propped up its stock price and deceived shareholders by
misclassifying revenues from its non-strategic mainframe business -
and moving said sales to its strategic business segments - in
violation of securities regulations.
... snip ...
Learson trying (and failing) to block the bureaucrats, careerists and
MBAs from destroying the Watson culture/legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM DB2 Date: 22 Aug, 2024 Blog: Linkedin
Original sql/relational was System/R, during the 70s at IBM San Jose Research. I worked with Jim Gray and Vera Watson on some of it. It was then possible to do tech transfer to Endicott (for SQL/DS) "under the radar" while the company was preoccupied with the next great DBMS: "EAGLE". When "EAGLE" eventually implodes, there was a request for how fast System/R could be ported to MVS; it is eventually released as DB2 (originally for decision support only).
... pg11
https://archive.computerhistory.org/resources/access/text/2013/05/102658267-05-01-acc.pdf
a little more here
https://www.mcjones.org/System_R/SQL_Reunion_95/sqlr95-DB2.html
note there were a few System/R "joint studies" ... one was BofA getting 60 vm4341s for distributed operation (Jim wanted me to take it over when he leaves for Tandem).
We got the HA/6000 project, originally to move the NYTimes newspaper
system (ATEX) off VAXCluster to RS/6000. I rename it HA/CMP when I
start doing technical/scientific cluster scale-up with national labs
and commercial cluster scale-up with RDBMS vendors that had VAXCluster
in same source base with UNIX (Oracle, Sybase, Informix,
Ingres). There was a simplified portable SQL/RDBMS (aka "Shelby")
being developed for OS/2 (but was years away from running
with/supporting high performance cluster UNIX)
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
Then cluster scale-up (planning 128-system clusters by ye1992) is
transferred for announcement as the IBM Supercomputer (for
technical/scientific *only*) and we are told we couldn't work with
anything that has more than four processors (we leave IBM a few months
later). Note: the mainframe DB2 group was complaining that if we were
allowed to continue, it would be years ahead of them. 1993 RS/6000
compared to the largest mainframe:
ES/9000-982 : 408MIPS, 51MIPS/processor
RS6000/990 : 126MIPS/processor; 16-system: 2016MIPS, 128-system: 16,128MIPS
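The cluster figures above are just the per-processor number multiplied out (the 982's eight processors are inferred from 408/51):

```python
# Aggregate-MIPS arithmetic behind the 1993 comparison.
es9000_982 = 51 * 8        # 8 processors at 51 MIPS each: 408 MIPS total
rs6000_990 = 126           # single-processor MIPS
print(es9000_982)          # 408
print(rs6000_990 * 16)     # 2016  (16-system cluster)
print(rs6000_990 * 128)    # 16128 (128-system cluster)
```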
Original SQL/Relational posts
https://www.garlic.com/~lynn/submain.html#systemr
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
801/risc, iliad, romp, rios, pc/rt, power, power/pc
https://www.garlic.com/~lynn/subtopic.html#801
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: NSFNET Date: 23 Aug, 2024 Blog: Facebook
Early 80s, got the HSDT project, T1 and faster computer links, both satellite and terrestrial. One of the 1st HSDT satellite links was a T1 circuit (over IBM's SBS C-band satellite system, 10m dishes) between the IBM Los Gatos lab and Clementi's
Also working with the NSF Director and was supposed to get $20M to interconnect the NSF Supercomputer Centers. We gave presentations at a few centers, including Berkeley (UC got a large NSF grant for a supercomputer center, supposedly for Berkeley, but the story was the Regents' bldg plan had San Diego getting the next new bldg, and it becomes the UCSD supercomputer center instead). In part because of the Berkeley presentations, in 1983 I was asked if I could help with the Berkeley 10M telescope project (it gets a large grant from the Keck Foundation and becomes Keck Observatory). They were also working on converting from film to CCDs and wanted to do remote viewing from the mainland.
Then congress cuts the budget, some other things happened and
eventually a RFP was released (in part based on what we already had
running), From 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
... IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies, but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet.
The RFP called for a T1 network, but the (IBM AWD) PC/RT(-based router) links were 440kbits/sec (not T1), and they put in T1 trunks with telco multiplexers (carrying multiple 440kbit links) to call it a T1 network. I periodically ridiculed this, asking why they didn't call it a T5 network, since it was possible that some of the T1 trunks were in turn carried over T5 trunks.
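The multiplexing arithmetic behind the ridicule is simple: a T1 trunk is 1.544 mbits/sec, so it can carry only a few 440kbit router links, and every end-to-end path is still 440kbits regardless of what the trunks run at:

```python
# Why 440kbit/sec router links multiplexed onto T1 trunks don't make a
# "T1 network": the trunk rate is not the end-to-end link rate.
T1 = 1_544_000      # T1 trunk, bits/sec
LINK = 440_000      # PC/RT router link, bits/sec
links_per_t1 = T1 // LINK
print(links_per_t1)  # 3 full 440kbit links fit in one T1 trunk
```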
For the T3 upgrade, I was asked to be the red team and numerous people from several labs were the blue team. At the final review, I presented 1st and then the blue team. A few minutes into the blue team presentation, the executive running the review pounded on the table and said he would lay down in front of a garbage truck before he let any but the blue team proposal go forward (several people leave).
HSDT also got our own 3-node custom-designed Ku-band TDMA satellite system (transponder on SBS4) with 4.5M dishes at IBM Los Gatos and IBM (Research) Yorktown and a 7M dish at IBM (AWD) Austin. IBM San Jose got EVE (VLSI logic verification/simulation) and HSDT ran a circuit from Los Gatos to the computer room with the EVE. IBM Austin AWD claimed that being able to use the EVE in San Jose helped bring the RIOS chip set (RS/6000) in a year early.
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
801/risc, iliad, romp, rios, pc/rt, power, power/pc
https://www.garlic.com/~lynn/subtopic.html#801
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM DB2 Date: 23 Aug, 2024 Blog: Linkedin
re:
70s, IMS forces were criticizing System/R because it required twice the disk space (of IMS) for the "index" and 4-5 times the I/O (traveling through the index). The System/R counter was that the index significantly reduced the skill level and manual effort (compared to IMS). In the 80s, for RDBMS, disk sizes were significantly increasing and price/bit significantly dropping ... and system memory significantly increasing (caching cutting physical I/Os) ... while IMS human resources were capped and costs increasing ... so we started seeing a big explosion in workstation and then PC RDBMS servers.
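A rough model of the "traveling through the index" criticism: each lookup reads one page per index level plus the data page, versus IMS following a direct pointer. The fanout and row counts below are illustrative assumptions, not figures from the post:

```python
import math

def index_levels(rows, fanout=200):
    """Height of a B-tree-style index: pages read before reaching the
    leaf, assuming each index page points at `fanout` children."""
    return max(1, math.ceil(math.log(rows, fanout)))

# I/Os per keyed lookup = index levels + 1 data page; an IMS-style
# direct pointer would be roughly a single data-page I/O.
for rows in (10_000, 1_000_000, 100_000_000):
    ios = index_levels(rows) + 1
    print(rows, ios)
```

With these assumed numbers a lookup costs 3-5 I/Os instead of roughly 1, which is the same ballpark as the 4-5x figure the IMS forces were citing; growing memory for caching the upper index levels is what erased most of that penalty in the 80s.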
When Jim left for Tandem, he also wanted me to pick up consulting for the IMS group.
trivia: when I 1st transferred to SJR, I got to wander around lots of datacenters in silicon valley, including bldg14&15 (disk engineering and product test) across the street. They were doing 7x24, pre-scheduled, stand-alone testing and had mentioned that they had recently tried MVS, but it had a 15min MTBF (in that environment, requiring manual re-ipl). I offer to redo the I/O supervisor, making it bullet-proof and never-fail, allowing any amount of concurrent, on-demand testing, greatly improving productivity. I then write an internal-only research report on the I/O integrity work, happening to mention the MVS 15min MTBF, bringing down the wrath of the MVS group on my head.
Unrelated: I get conned into doing channel-extender support for the IMS group. 1980, STL (now "SVL") was bursting at the seams and they were moving 300 people from the IMS group to an offsite bldg, with dataprocessing back to the STL datacenter. The IMS group had tried "remote 3270" support but found the human factors totally unacceptable. I do channel-extender support so they can place channel-attached 3270 controllers at the offsite bldg, with no perceptible difference in human factors between in-STL and offsite.
The hardware vendor then wants IBM to release my channel-extender support, but there is a group in POK playing with some serial stuff that gets it vetoed (afraid that if it was in the market, it would be harder to justify releasing their stuff). Then in 1988, the IBM branch office asks if I can help LLNL (national lab) get some serial stuff they are playing with standardized, which quickly becomes the fiber-channel standard ("FCS", including some stuff I had done in 1980), initially 1gbit/sec, full-duplex, 200mbyte/sec aggregate. The POK engineers then get their stuff released in the 90s with ES/9000 as ESCON (when it is already obsolete, 17mbyte/sec).
Then some POK engineers become involved with FCS and define a heavy-weight protocol that drastically reduces the native throughput, eventually shipped as FICON. The latest public benchmark I've found is z196 "Peak I/O" getting 2M IOPS using 104 FICON. About the same time, an FCS was announced for E5-2600 server blades claiming over a million IOPS (two such FCS have higher throughput than 104 FICON). IBM docs also recommend that SAPs (system assist processors that do actual I/O) be kept to 70% CPU ... which would be more like 1.5M IOPS. Also, no real CKD DASD has been made for decades, all being simulated on industry-standard fixed-block disks.
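Back-of-envelope on the comparison, using only the numbers quoted above (the 70% line is a straight proportion, landing in the same ballpark as the ~1.5M figure):

```python
# z196 "Peak I/O" vs a single E5-2600 FCS, per the quoted benchmarks.
z196_peak = 2_000_000      # IOPS across 104 FICON
ficon_cards = 104
per_ficon = z196_peak / ficon_cards   # throughput of one FICON
e5_2600_fcs = 1_000_000               # claimed IOPS of one FCS

print(per_ficon)                 # roughly 19K IOPS per FICON
print(e5_2600_fcs / per_ficon)   # one FCS worth ~52 FICON
print(z196_peak * 0.70)          # 70%-SAP cap: ~1.4M usable IOPS
```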
System/R posts
https://www.garlic.com/~lynn/submain.html#systemr
getting to play disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FICON and FCS posts
https://www.garlic.com/~lynn/submisc.html#ficon
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM/PC Date: 23 Aug, 2024 Blog: Facebook
Nov1987, Boca sent email to Endicott saying that they were told that VM scheduling was much better than OS/2's; Endicott forwards it to Kingston, Kingston forwards it to me. I send Boca stuff on how to do scheduling (that I had originally done as an undergraduate in the 60s for CP67/CMS ... precursor to VM370/CMS).
Note: The communication group was fiercely fighting off client/server and distributed computing, trying to preserve their dumb terminal paradigm (and leveraging their corporate strategic responsibility for everything that crossed datacenter walls). Trivial example: AWD PC/RT (16bit AT/bus) did their own 4mbit token-ring card. Later, for RS/6000 w/microchannel, AWD was told they couldn't do their own cards but had to use PS2 cards (which had been heavily performance-kneecapped by the communication group); example was the microchannel 16mbit token-ring card having lower throughput than the PC/RT 16bit AT/bus 4mbit token-ring card (further aggravated by the $800 16mbit token-ring card having significantly lower throughput than $69 Ethernet cards).
Late 80s, a senior disk engineer got a talk scheduled at the annual, world-wide, internal communication group conference, supposedly on 3174 performance, but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division ... they were seeing data fleeing datacenters (to more distributed computing friendly platforms) with a drop in disk sales. They had come up with multiple solutions that were constantly vetoed by the communication group.
Communication group datacenter stranglehold wasn't just disks and a
couple years later, IBM had one of the largest losses in the history
of US companies and was being re-orged into the 13 "baby blues" in
preparation for breaking up the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup of the company. Before we get
started, the board brings in the former president of AMEX who
(somewhat) reverses the breakup (although it wasn't long before the
disk division is gone).
Learson tried (& failed) to block the bureaucrats, careerists, and
MBAs from destroying Watson culture and legacy; the loss/reorg in
preparation for breakup comes 20yrs later
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
scheduling posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
communication group trying to preserve dumb terminal paradigm posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
former AMEX president posts
https://www.garlic.com/~lynn/submisc.html#gerstner
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: The Internet's 100 Oldest Dot-Com Domains Date: 25 Aug, 2024 Blog: Facebook
The Internet's 100 Oldest Dot-Com Domains
Person responsible for the science center's wide-area network in the
60s
https://en.wikipedia.org/wiki/Edson_Hendricks
... evolves into internal corporate network and technology also used
for the corporate sponsored univ. BITNET
https://en.wikipedia.org/wiki/BITNET
Account by one of the inventors of GML in 1969 at the science center,
about his original job promoting the wide-area network
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
my email address was cambridg/wheeler. Much later, I had a unix workstation at Interop88 in a booth at immediate right angles to the SUN booth; Case was in the SUN booth with SNMP and I con'ed him into installing it on my workstation. trivia: Sunday before the show opened, the floor nets were crashing with packet floods .... eventually got diagnosed ... a provision about it shows up in RFC1122.
previous reference
https://www.garlic.com/~lynn/2024b.html#33 Internet
https://www.garlic.com/~lynn/2024b.html#34 Internet
https://www.garlic.com/~lynn/2024b.html#35 Internet
https://www.garlic.com/~lynn/2024b.html#36 Internet
https://www.garlic.com/~lynn/2024b.html#37 Internet
https://www.garlic.com/~lynn/2024b.html#38 Internet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
gml, sgml, html, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
interop88 posts
https://www.garlic.com/~lynn/subnetwork.html#interop88
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Scheduler Date: 26 Aug, 2024 Blog: Facebook
... well, not batch, but there is the one I did for CP67 as undergraduate in the 60s. Then when I joined IBM, one of my hobbies was enhanced production operating systems for internal datacenters. In the morph from CP67->VM370, lots of stuff was dropped/simplified (my dynamic adaptive resource manager, multiprocessing support, etc). I then was migrating internal systems from CP67 to VM370, starting with VM370R2. At the time, SHARE was submitting requests that the "wheeler scheduler" be added to VM370.
With the 23June1969 unbundling announcement, IBM started charging for
(application) software (but made the case that kernel software should
still be free), SE services, maint., etc. In the early 70s, IBM
started the "Future System" project, completely different from 370 and
going to completely replace it (I continued to work on 360/370 all
during FS, periodically ridiculing what they were doing); also
internal politics was killing off 370 efforts (the lack of new 370s
during the period is credited with giving the clone 370 makers their
market foothold). Then when FS finally implodes, there is a mad rush to
get stuff back into the 370 product pipelines, including kicking off
the quick&dirty 3033 & 3081 in parallel.
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html
also with Future System politics having given rise to the 370 clone makers, the decision was made to transition to charging for all kernel software, starting with "charged for" addons, and a bunch of my stuff was selected as "guinea pig". VM370 had been given the 3letter prefix "DMK" for source modules ... and so I chose "DMKSTP" (from TV commercials at the time). Then somebody from corporate said he wouldn't sign off on release because I didn't have any manual tuning knobs, claiming manual tuning knobs at the time were "state of the art" (aka MVS with what looked like hundreds of little manual twiddling things). Trying to explain "dynamic adaptive" fell on deaf ears, so I implemented a few manual tuning knobs accessed with the "SRM" command (parody on MVS) ... detailed description and source for the formulas were published. The joke (from operations research) was that the dynamic adaptive adjustments had more degrees of freedom than the manual tuning knobs (able to offset any manual adjustment).
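The "dynamic adaptive" idea can be sketched roughly as fair-share deadline scheduling: recent consumption relative to entitlement pushes a user's dispatch deadline out, so the feedback self-corrects without manual knobs. This is a hypothetical simplification for illustration, not the actual published DMKSTP/Resource Manager formulas:

```python
# Minimal sketch of fair-share "dynamic adaptive" dispatching --
# a hypothetical simplification, NOT the actual VM370 Resource
# Manager formulas (those were published with the product).

class User:
    def __init__(self, name, share=1.0):
        self.name = name
        self.share = share      # relative entitlement (fair share)
        self.consumed = 0.0     # decayed recent resource consumption

def deadline(user, now=0.0, interval=1.0):
    # Heavier recent consumption (relative to share) pushes the
    # dispatch deadline further out; light users get service sooner.
    # Because consumption itself drives the adjustment, the feedback
    # can offset any manual tweak (the joke in the text).
    return now + interval * (1.0 + user.consumed / user.share)

def dispatch_order(users, now=0.0):
    # dispatch earliest-deadline first
    return [u.name for u in sorted(users, key=lambda u: deadline(u, now))]

heavy = User("heavy");          heavy.consumed = 4.0
light = User("light")
vip   = User("vip", share=2.0); vip.consumed = 4.0  # same use, double share

print(dispatch_order([heavy, light, vip]))  # ['light', 'vip', 'heavy']
```

The light user goes first, the double-share user is deferred half as far as the equal-consumption single-share user.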
15yrs later (after the 1st customer VM370 release), we were on a
world-wide marketing trip for our IBM HA/CMP product (some customers
used it for deploying some number of workflow management products)
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
and in Hong Kong, calling on a large bank, a recently graduated new
IBM Marketing Rep asked me if I was wheeler of the
wheeler scheduler ... saying they had studied it at
Univ. of Waterloo
trivia: same time as packaging for the VM370 Resource Manager, I had also been asked to work on a 16-processor 370 and we con the 3033 processor engineers into working on it in their spare time. Everybody thought it was great until somebody tells the head of POK that it could be decades before POK's favorite son operating system ("MVS") had (effective) 16-processor support; some of us were invited to never visit POK again and the 3033 processor engineers were directed heads down and no distractions. Note: at the time, MVS documentation said that MVS 2-processor throughput was 1.2-1.5 times that of a single processor (and POK doesn't ship a 16-processor machine until after the turn of the century).
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
(internal) csc/vm posts
https://www.garlic.com/~lynn/submisc.html#cscvm
23jun1969 unbundling announce
https://www.garlic.com/~lynn/submain.html#unbundle
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
dynamic adaptive resource manager posts
https://www.garlic.com/~lynn/subtopic.html#fairshare
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
SMP, shared-memory multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM GPD/AdStaR Disk Division Date: 27 Aug, 2024 Blog: Facebook
senior disk engineer in the late 80s got a talk scheduled at the annual, internal, world-wide communication group conference, supposedly on 3174 performance, but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. They were seeing a drop in sales with data fleeing datacenters for more distributed computing friendly platforms and had come up with a number of solutions. However, they were all being vetoed by the communication group with their corporate strategic "ownership" of everything that crossed datacenter walls (fiercely fighting off client/server and distributed computing, trying to preserve their dumb terminal paradigm). The disk division (GPD/AdStaR) software executive's partial workaround was to invest in distributed computing startups that would use IBM disks; he would periodically ask us to drop in on his investments to see if we could provide any help.
DASD, Channel and I/O long winded trivia (3380, HIPPI, 9333, SSA, FCS,
FICON, etc)
https://www.linkedin.com/pulse/dasd-channel-io-long-winded-trivia-lynn-wheeler/
CEO Learson tried (and failed) to block the bureaucrats, careerists
and MBAs from destroying Watson culture and legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
... 20yrs later (a couple yrs after the senior disk engineer's talk;
the communication group stranglehold on datacenters wasn't just disks)
IBM has one of the largest losses in the history of US companies and
was being reorged into the 13 "baby blues" in preparation for breaking
up the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup of the company. Before we get started, the board brings in the former president of AMEX who (somewhat) reverses the breakup (although it wasn't long before the disk division is gone).
getting to play disk engineer in bldgs14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FCS, FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
communication group trying to preserve dumb terminal paradigm posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
former AMEX president posts
https://www.garlic.com/~lynn/submisc.html#gerstner
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: When Did "Internet" Come Into Common Use Date: 27 Aug, 2024 Blog: Facebook
misc IEN:
IEN#1 - Issues in the Interconnection of Datagram Networks 1977/07/29
IEN#3 - Internet Meeting Notes 1977/08/15
IEN#5 - TCP Version 2 Specification 1977/03/
IEN#10 - Internet Broadcast Protocols 1977/03/07
IEN#11 - Internetting or Beyond NCP 1977/03/21
IEN#14 - Thoughts on Multi-net Control and Data Collection Facilities 1977/02/28
lots of email refs starting in the late 70s include "internet" and "internetwork"
reference: Postel, Sunshine, and Cohen: The ARPA Internet Protocol, fourth issue (June?) of 1981 Computer Networks, pp 261-271.
lots of email signature lines starting around 1987 included: "internet: xxxxxxxx@IBM.COM" ... to differentiate from internal corporate network and bitnet/csnet.
... possibly part of the issue was that the enabler for the modern internet
started with NSFNET, which had a (non-commercial) "Acceptable Use Policy"
INDEX FOR NSFNET Policies and Procedures
3 Jun 93
This directory contains information about the policies and procedures
established by the National Science Foundation Network (NSFNET) and
its associated networks. These documents were collected by the NSF
Network Service Center (NNSC). With thanks to the NNSC and Bolt
Beranek and Newman, Inc., they are now available by anonymous FTP from
InterNIC Directory and Database Services on ds.internic.net.
... example:
<NIS.NSF.NET> [NSFNET] NETUSE.TXT
Interim 3 July 1990
Acceptable Use Policy
The purpose of NSFNET is to support research and education in and
among academic institutions in the U.S. by providing access to unique
resources and the opportunity for collaborative work.
This statement represents a guide to the acceptable use of the NSFNET
backbone. It is only intended to address the issue of use of the
backbone. It is expected that the various middle level networks will
formulate their own use policies for traffic that will not traverse
the backbone.
... snip ...
trivia: I got the HSDT project in the early 80s, T1 and faster computer
links (both terrestrial and satellite), and was working with the NSF
Director ... was supposed to get $20M to interconnect the NSF
Supercomputer centers; then congress cuts the budget, some other things
happen, and eventually an RFP is released (in part based on what we
already had running). From the 28Mar1986 Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed; folklore is that 5of6 members of the corporate executive committee wanted to fire me). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet.
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
Internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: When Did "Internet" Come Into Common Use Date: 27 Aug, 2024 Blog: Facebook
re:
story about how JES2 group leveraged VM370 RSCS/VNET to get both VNET
and NJE announced for customers
https://www.garlic.com/~lynn/2024e.html#46 Netscape
in a comment/reply thread, including about the person responsible for
the science center wide-area network, which evolves into the internal
corporate network
https://www.garlic.com/~lynn/2024e.html#41 Netscape
https://www.garlic.com/~lynn/2024e.html#43 Netscape
https://www.garlic.com/~lynn/2024e.html#44 Netscape
https://www.garlic.com/~lynn/2024e.html#45 Netscape
https://www.garlic.com/~lynn/2024e.html#47 Netscape
and technology also used for the corporate sponsored univ BITNET
https://en.wikipedia.org/wiki/BITNET
Since VNET/RSCS wasn't SNA/VTAM, marketing then stopped shipping the
native VNET drivers (just the NJE simulation) ... although the
internal corporate network kept using the native VNET drivers because
they were more efficient and higher throughput ... at least until the
communication group forced the conversion of the internal corporate
network to SNA/VTAM. Meanwhile, the person responsible for it all was
battling for conversion to TCP/IP ... SJMerc article about Edson (he
passed aug2020) and "IBM'S MISSED OPPORTUNITY WITH THE INTERNET" (gone
behind paywall but lives free at wayback machine)
conversion to TCP/IP ... SJMerc article about Edson (he passed
aug2020) and "IBM'S MISSED OPPORTUNITY WITH THE INTERNET" (gone behind
paywall but lives free at wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references
from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm
The communication group had also been battling to not release mainframe TCP/IP; when that was overruled, they changed their tactic and said that they had corporate strategic responsibility for everything that crossed datacenter walls and it had to be released through them. What shipped got an aggregate of 44kbytes/sec using nearly a whole 3090 processor. I then do the support for RFC1044 and, in some tuning tests at Cray Research between a Cray and an IBM 4341, got sustained 4341 channel throughput using only a modest amount of the 4341 processor (something like a 500 times increase in bytes moved per instruction executed).
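Back-of-envelope arithmetic on those figures; the 3mbyte/sec sustained 4341 channel speed is an assumption (the text only says "sustained 4341 channel throughput"):

```python
# Rough arithmetic on the RFC1044 comparison.  The 3mbyte/sec
# sustained 4341 channel figure is an assumption for illustration.

base_rate = 44_000        # bytes/sec, using nearly a whole 3090 processor
rfc1044_rate = 3_000_000  # bytes/sec, sustained 4341 channel speed (assumed)

print(round(rfc1044_rate / base_rate))  # ~68x raw throughput
# ... achieved on a far slower CPU (4341 vs 3090) using only a modest
# fraction of it -- which is what compounds into the "something like
# 500 times" bytes-moved-per-instruction figure quoted in the text.
```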
cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM San Jose Date: 27 Aug, 2024 Blog: Facebook
re:
old almaden TSM article (gone 404)
https://web.archive.org/web/20060312093224/http://www.almaden.ibm.com/StorageSystems/Past_Projects/TSM.shtml
TSM wiki (ibm specturm, ibm storage protect)
https://en.wikipedia.org/wiki/IBM_Tivoli_Storage_Manager
https://en.wikipedia.org/wiki/IBM_Tivoli_Storage_Manager#History
TSM descended from a project done at IBM's Almaden Research Center
around 1988 to back up VM/CMS systems. The first product that emerged
was Workstation Data Save Facility (WDSF). WDSF's original purpose was
to back up PC/DOS, OS/2, and AIX workstation data onto a VM/CMS (and
later MVS) server. WDSF morphed into ADSTAR Distributed Storage
Manager (ADSM) and was re-branded Tivoli Storage Manager in 1999.
... snip ...
... skips the part about my having 1st done CMSBACK in the late 70s at SJR for internal datacenters, and its having gone through several releases
After the first version, I worked with one of the datacenter support people on the next few releases of CMSBACK. He did a CMSBACK presentation at the VM ITE (11AM, 27Feb1980). Later, he left IBM and was doing consulting in silicon valley.
backup/archive posts
https://www.garlic.com/~lynn/submain.html#backup
GPD VM/370 Internal Technical Exchange

The following is the agenda for the meeting.

Tuesday, February 26, 1980:
9:00 - W.W. Eggleston - VM/370 ITE Kickoff. Mr. Eggleston is the President of GPD.
9:30 - Ray Holland - ITE Overview.
9:45 - Forrest Garnett - Dynamic Writable Shared Segment Overview.
10:00 - Jim Gray - System R, An Overview.
10:30 - Break
11:00 - Gene Svenson - Extended Networking.
11:30 - Roy Engehausen - Network management and load balancing tools.
12:00 - Lunch
1:00 - Peter Rocker - Network Response monitoring, Remote Dial Support, and VM/370 Hyper Channel attachment.
1:20 - Jeff Barber - CADCOM - Series/1 to VM/370 inter-program communications.
1:35 - Noah Mendelson - PVM - Pass Through Virtual Machine Facility.
2:00 - Noah Mendelson - EDX on Series/1 as a VNET work station.
2:15 - Tom Heald - Clock - Timer Driven CMS Virtual Machine.
2:30 - Break
3:00 - Vern Skinner - 3540 Diskette Read/Write Support in CMS.
3:30 - Bobby Lie - VM/SP - System Product offering overview and discussion.
4:30 - Closing - From this point on there can be small informal sessions of points of interest.

Wednesday, February 27, 1980:
9:00 - Billie Smith - Common System Plans, modifications and results.
9:30 - Claude Hans / Nagib Badre - XEDIT Update.
10:00 - Graham Pursey - SNATAM - Controlling SNA devices from CMS.
10:30 - Break
11:00 - Mike Cowlishaw - REX Executor.
11:45 - Mark Brown - San Jose File Back-up System.
12:00 - Lunch
1:00 - Albert Hafner - VMBARS - VM/370 Backup and Retrieval System.
1:45 - Chris Bishoff / Tom Hutton - 6670 Office System Printer and VM/370.
2:15 - Break
2:45 - Rodger Bailey - VM/370 Based Publication System.
3:15 - Dieter Paris - Photo-composition Support in DCF.
3:30 - John Howard / Dave Fritz - VM Emulator Extensions.
3:40 - Tom Nilson - DPPG Interactive Strategy and VM/CMS.
4:30 - Closing - From this point on there can be small informal sessions of points of interest.
*4:30 - Editor Authors - This will be an informal exchange of information on the changes coming and any input from users on edit concepts. All those wishing to express their opinions should attend.

Thursday, February 28, 1980:
9:00 - Ed Hahn - VM/370 Multi-Drop Support.
9:30 - Ann Jones - Individual Password System for VM/370.
10:00 - Walt Daniels - Individual Computing based on EM/YMS.
10:30 - Break
11:00 - Chris Stephenson - EM/370 - Extended Machine 370 and EM/YMS.
12:00 - Lunch
1:00 - Simon Nash - Extended CMS Performance Monitoring Facility.
1:30 - George McQuilken - Distributed Processing Machine Controls.
2:00 - Mike Cassily - Planned Security Extensions to VM/370 at the Cambridge Scientific Center.
2:30 - Break
3:00 - Steve Pittman - Intra Virtual Machine Synchronization Enqueue/Dequeue Mechanisms.
3:30 - Jeff Hill - Boulder F.E. Source Library Control System.
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM DB2 Date: 28 Aug, 2024 Blog: Linkedin
re:
... eagle in URLs from upthread
... pg.9 (computerhistory) refs NOMAD from National CSS .... NCSS was
a CP67 (precursor to VM/370) spin-off from the IBM Cambridge Science
Center (and System/R was done on VM/370). In the 60s, I was an
undergraduate at a CP67 install (3rd after IBM CSC itself and MIT
Lincoln Labs) and had rewritten lots of CP67 and CMS. CSC announced a
one week class at the Beverly Hills Hilton; I arrive Sunday and am
asked to teach the CP67 class (the people that were to teach it had
resigned the Friday before to join/form NCSS). RAMIS & NOMAD
https://en.wikipedia.org/wiki/Ramis_software
https://en.wikipedia.org/wiki/Nomad_software
trivia: When we were doing HA/CMP (w/major RDBMS vendors), I didn't remember him (from IBM STL), but an Oracle senior VP claimed he had done the tech transfer from Endicott SQL/DS and SJR System/R for (MVS) DB2.
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
system/r posts
https://www.garlic.com/~lynn/submain.html#systemr
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
posts mentioning csc, cp67, ramis, nomad, system/r
https://www.garlic.com/~lynn/2024b.html#17 IBM 5100
https://www.garlic.com/~lynn/2023g.html#64 Mainframe Cobol, 3rd&4th Generation Languages
https://www.garlic.com/~lynn/2023.html#13 NCSS and Dun & Bradstreet
https://www.garlic.com/~lynn/2022f.html#116 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022c.html#28 IBM Cambridge Science Center
https://www.garlic.com/~lynn/2022b.html#49 4th generation language
https://www.garlic.com/~lynn/2021k.html#92 Cobol and Jean Sammet
https://www.garlic.com/~lynn/2019d.html#16 The amount of software running on traditional servers is set to almost halve in the next 3 years amid the shift to the cloud, and it's great news for the data center business
https://www.garlic.com/~lynn/2019d.html#4 IBM Midrange today?
https://www.garlic.com/~lynn/2018c.html#85 z/VM Live Guest Relocation
https://www.garlic.com/~lynn/2017j.html#39 The complete history of the IBM PC, part two: The DOS empire strikes; The real victor was Microsoft, which built an empire on the back of a shadily acquired MS-DOS
https://www.garlic.com/~lynn/2017j.html#29 Db2! was: NODE.js for z/OS
https://www.garlic.com/~lynn/2014e.html#34 Before the Internet: The golden age of online services
https://www.garlic.com/~lynn/2013m.html#62 Google F1 was: Re: MongoDB
https://www.garlic.com/~lynn/2013c.html#56 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2011m.html#69 "Best" versus "worst" programming language you've used?
https://www.garlic.com/~lynn/2007e.html#37 Quote from comp.object
https://www.garlic.com/~lynn/2006k.html#35 PDP-1
https://www.garlic.com/~lynn/2003n.html#15 Dreaming About Redesigning SQL
https://www.garlic.com/~lynn/2003d.html#17 CA-RAMIS
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Under industry pressure, IRS division blocked agents from using new law to stop wealthy tax dodgers Date: 29 Aug, 2024 Blog: Facebook
Under industry pressure, IRS division blocked agents from using new law to stop wealthy tax dodgers. High-powered tax attorneys bemoaned the 2010 legislation meant to crack down on big-dollar tax shelters. They ended
tax fraud, tax evasion, tax loopholes, tax abuse, tax avoidance, tax
haven posts
https://www.garlic.com/~lynn/submisc.html#tax.evasion
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Mainframe Processor and I/O Date: 29 Aug, 2024 Blog: Facebook
1988, IBM branch office asks if I can help LLNL (national lab) get some serial stuff they are playing with standardized, which quickly becomes the fibre-channel standard ("FCS", including some stuff I had done in 1980), initially 1gbit/sec, full-duplex, 200mbyte/sec aggregate. Then the POK engineers finally get their stuff released in the 90s with ES/9000, as ESCON (when it is already obsolete, 17mbyte/sec).
Later some POK engineers become involved with FCS and define a heavy-weight protocol that drastically reduces the native throughput, which eventually ships as FICON. The latest public benchmark I've found is z196 "Peak I/O" getting 2M IOPS using 104 FICON (running over 104 FCS). About the same time, an FCS is announced for E5-2600 server blades claiming over a million IOPS (two such FCS have higher throughput than all 104 FICON). IBM docs also recommended that SAPs (system assist processors that do the actual I/O) be kept to 70% CPU ... which would be more like 1.5M IOPS. Also, no real CKD DASD has been made for decades, all being simulated on industry standard fixed-block disks.
Note: a max configured z196 went for $30M (and benchmarked at 50BIPS; industry standard benchmark program, iterations compared to reference platform 370/158-3) while the IBM base list price for an E5-2600 server blade was $1815 (same benchmark, but 500BIPS, ten times a max configured z196). For a few decades, large cloud operators have claimed they assemble their own server blades at 1/3rd the cost of brand name blades. Shortly after industry press reported that server chip makers were shipping at least half their product directly to cloud operators, IBM sells off its server blade business.
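The price/performance gap in those numbers is stark when reduced to dollars per BIPS:

```python
# Price/performance from the figures in the text (same benchmark for both).

z196_price, z196_bips = 30_000_000, 50  # max-configured z196
blade_price, blade_bips = 1815, 500     # IBM base list, E5-2600 blade

z196_per_bips = z196_price / z196_bips    # $600,000 per BIPS
blade_per_bips = blade_price / blade_bips # $3.63 per BIPS
print(z196_per_bips, blade_per_bips)
print(round(z196_per_bips / blade_per_bips))  # ~165,000x difference
```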
A large cloud operation will have a score or more megadatacenters around the world, each with hundreds of thousands of server blade systems (each blade may now be tens of TIPS rather than .5TIPS) and enormous automation; a megadatacenter is staffed with only 70-80 people.
IBM CEO Learson tried (and failed) to block the bureaucrats,
careerists, and MBAs from destroying Watson culture&legacy. 20yrs
later, (communication group stranglehold on datacenters wasn't just
disks) IBM has one of the largest losses in the history of US
companies and was being reorged into the 13 "baby blues" in
preparation for breaking up the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup of the company. Before we get started, the board brings in the former president of AMEX who (somewhat) reverses the breakup.
longer reference from a couple years ago
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
fibre channel standard ...
https://en.wikipedia.org/wiki/Fibre_Channel
...from the 1990 FCS design review meeting at LLNL: Ancor was installing a 32-port prototype (expandable to 4096 ports) non-blocking switch. The design would allow 132mbit and 1gbit to co-exist.
I had wanted IBM (Hursley) 9333 (80mbit, full-duplex, serial copper)
to evolve into interoperable FCS-compatible, instead it evolves into
160mbit SSA
https://en.wikipedia.org/wiki/Serial_Storage_Architecture
late 80s, I was also asked to participate in SLAC Gustavson SCI
https://www.slac.stanford.edu/pubs/slacpubs/5000/slac-pub-5184.pdf
https://en.wikipedia.org/wiki/Scalable_Coherent_Interface
was used for scalable multiprocessors ... including Sequent's NUMA-Q
256-processor (Sequent was later bought by IBM; trivia: I did some
consulting for CTO Steve Chen before the IBM purchase)
https://en.wikipedia.org/wiki/Sequent_Computer_Systems#NUMA
As used for I/O protocol, contributed to evolution of InfiniBand
https://en.wikipedia.org/wiki/InfiniBand
other history
https://en.wikipedia.org/wiki/Futurebus
fibre-channel and FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
cloud megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter
some posts mentioning 9333, SSA, "Peak I/O", and e5-2600
https://www.garlic.com/~lynn/2024e.html#22 Disk Capacity and Channel Performance
https://www.garlic.com/~lynn/2024c.html#60 IBM "Winchester" Disk
https://www.garlic.com/~lynn/2024.html#71 IBM AIX
https://www.garlic.com/~lynn/2024.html#35 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023f.html#70 Vintage RS/6000 Mainframe
https://www.garlic.com/~lynn/2023d.html#104 DASD, Channel and I/O long winded trivia
https://www.garlic.com/~lynn/2022b.html#15 Channel I/O
https://www.garlic.com/~lynn/2019b.html#60 S/360
https://www.garlic.com/~lynn/2017b.html#73 The ICL 2900
https://www.garlic.com/~lynn/2016.html#19 Fibre Chanel Vs FICON
some posts mentioning SLAC, Gustavson, SCI, and Sequent
https://www.garlic.com/~lynn/2024.html#54 RS/6000 Mainframe
https://www.garlic.com/~lynn/2023g.html#106 Shared Memory Feature
https://www.garlic.com/~lynn/2023e.html#78 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2022g.html#91 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2021i.html#16 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021h.html#45 OoO S/360 descendants
https://www.garlic.com/~lynn/2019c.html#53 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2019.html#32 Cluster Systems
https://www.garlic.com/~lynn/2018d.html#57 tablets and desktops was Has Microsoft
https://www.garlic.com/~lynn/2014m.html#176 IBM Continues To Crumble
https://www.garlic.com/~lynn/2013g.html#49 A Complete History Of Mainframe Computing
https://www.garlic.com/~lynn/2010.html#44 Larrabee delayed: anyone know what's happening?
https://www.garlic.com/~lynn/2006q.html#24 "25th Anniversary of the Personal Computer"
https://www.garlic.com/~lynn/2001f.html#11 Climate, US, Japan & supers query
https://www.garlic.com/~lynn/2001b.html#85 what makes a cpu fast
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: When Did "Internet" Come Into Common Use Date: 29 Aug, 2024 Blog: Facebook
re:
The IBM Science Center had a (non-SNA) wide-area network in the 60s ... which evolves into the internal corporate network (also non-SNA). In the 60s, IBM also marketed the 2701, a 360 telecommunication controller that supported T1 speeds. With the introduction of SNA/VTAM in the mid-70s, there appeared to be issues with SNA/VTAM that resulted in capping controllers at 56kbit/sec links.
Late 80s (more than two decades after the 2701), IBM came out with the SNA/VTAM 3737 controller that supported a (short-haul, terrestrial) T1 link (aggregate was about 2mbit/sec, while US full-duplex T1 is 3mbit and EU full-duplex 4mbit). It had a boat load of memory and Motorola 68k processors and simulated a (local) CTCA VTAM ... the local 3737 VTAM simulation would immediately ACK RUs to the local HOST VTAM ... and then transmit the RUs in the background to the remote 3737 (for forwarding to the remote HOST VTAM) ... in order to keep the traffic flowing ... otherwise the round-trip delay for ACKs resulted in very little traffic flowing (even over short-haul terrestrial T1).
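The problem the 3737 was working around is the classic bandwidth-delay product: with a fixed window of unacknowledged data, throughput can never exceed window/RTT, regardless of link speed. A sketch with illustrative numbers (the window and RTT values are assumptions, not measured 3737/VTAM figures):

```python
# Why end-to-end ACKs starve a T1: throughput is capped at window/RTT.
# Window and RTT values below are illustrative assumptions.

def max_throughput(window_bytes, rtt_sec):
    # at most one window of data can be in flight per round trip
    return window_bytes / rtt_sec

t1_bps = 1_544_000    # US T1 line rate, bits/sec
window = 7 * 256      # assumed pacing window: 7 RUs of 256 bytes each
rtt = 0.060           # assumed 60ms terrestrial round trip

capped = max_throughput(window, rtt)  # ~29.9 kbytes/sec
print(capped * 8 / t1_bps)            # ~15% of the T1 actually usable

# The 3737 ACKs immediately on the local side (the host sees near-zero
# RTT), then streams the RUs to the remote 3737 in the background,
# keeping the real link full.
```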
Leading up to the NSFnet T1 RFP award ... there was a significant amount of internal corporate misinformation email about being able to use SNA .... somebody collected it and forwarded the collection.
scenter posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
old SNA/3737 related email
https://www.garlic.com/~lynn/2011g.html#email880130
https://www.garlic.com/~lynn/2011g.html#email880606
https://www.garlic.com/~lynn/2011g.html#email881005
other TCP/IP related email
https://www.garlic.com/~lynn/2011g.html#email870328
https://www.garlic.com/~lynn/2011g.html#email881020
https://www.garlic.com/~lynn/2011g.html#email890217
https://www.garlic.com/~lynn/2011g.html#email890512
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM TPF Date: 29 Aug, 2024 Blog: Linkedin
When the 3081 shipped in the early 80s, it was supposedly multiprocessor only ... however ACP/TPF didn't have multiprocessor support and there was concern that the whole ACP/TPF market would move to Amdahl (the latest Amdahl single processor had about the same MIPS/processing rate as the aggregate of the two-processor 3081K). Eventually IBM ships the 3083 (a 3081 with one of the processors removed).
Later in the 80s, my wife did a short stint as chief architect for Amadeus (the EU system built off the old Eastern Airlines "System One"). She didn't remain very long because she sided with EU on the use of X.25 (instead of SNA) and the IBM communication group got her replaced. It didn't do them much good; EU went with X.25 anyway, and the communication group's replacement was later replaced.
Pending the 3083, there were unnatural things done to VM370 attempting to increase TPF throughput running in a virtual machine on a two-processor 3081 ... however it decreased VM370 throughput for all the other VM370 multiprocessor customers (not limited to just 3081s, but also 3033, 168, 158, etc). I got called into some of these VM370 customers, including a very large gov. customer (dating back to 60s CP67 days) ... to look at masking the degradation (i.e. not being allowed to revert their VM370 to the pre-TPF code).
The 3083J, 3083JX, 3083KX, then the 9083: hand-picked 3083s that were able to run the clock a little faster ... and then microcode tailored to TPF I/O patterns. They also played with running the 9083 with DAT disabled to see if that ran any faster (making it only capable of running TPF).
trivia: after leaving IBM in the 90s, I was called into the largest TPF airline res system to look at ten things they couldn't do. I was asked to start by looking at "ROUTES" (about 25% of processing load) ... they gave me a complete softcopy of the OAG (all commercial airline scheduled flights) and I went away for several weeks and came back with ROUTES running on RS/6000s that did all the (ROUTES) impossible things. I claimed that lots of TPF still reflected 60s technology trade-offs; starting from scratch I could make completely different trade-offs. The first pass was just the existing implementation, but able to handle all routes requests for all airlines for all passengers in the world (running about 100 times faster than the mainframe version). Then adding in support for the impossible things they wanted done made it only ten times faster than the mainframe implementation (in part because 3-4 existing transactions were collapsed into a single transaction). Showed it was possible to handle every ROUTES request in the world with a pool of ten RS6000/990s.
a couple past ACP/TPF, 3083, ROUTES posts
https://www.garlic.com/~lynn/2021i.html#77 IBM ACP/TPF
https://www.garlic.com/~lynn/2016.html#58 Man Versus System
https://www.garlic.com/~lynn/2015d.html#84 ACP/TPF
smp, multiprocessor, tightly-coupled posts
https://www.garlic.com/~lynn/subtopic.html#smp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: The rise and fall of IBM Date: 30 Aug, 2024 Blog: Facebook
"The rise and fall of IBM" (English) 20jan1995
Ferguson & Morris, "Computer Wars: The Post-IBM World", Time Books,
1993
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394
"and perhaps most damaging, the old culture under Watson Snr and Jr of
free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE NO
WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived
in the shadow of defeat ... But because of the heavy investment of
face by the top management, F/S took years to kill, although its wrong
headedness was obvious from the very outset. "For the first time,
during F/S, outspoken criticism became politically dangerous," recalls
a former top executive."
... snip ...
more F/S
http://www.jfsowa.com/computer/memo125.htm
just about the time FS was starting, 1972, CEO Learson tried (and
failed) to block the bureaucrats, careerists and MBAs from destroying
the Watson culture & legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
Two decades later, IBM had one of the largest losses in the history of
US companies and was being reorged into the 13 "baby blues" (somewhat
recalling the "baby bells" breakup of AT&T a decade before) in
preparation for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
We had already left IBM, but get a call from the bowels of Armonk asking if we could help with the breakup of the company. Before we get started, the board brings in the former president of AMEX, who (somewhat) reverses the breakup.
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM TPF Date: 30 Aug, 2024 Blog: Linkedin
re:
trivia: the IBM communication group was fiercely fighting off client/server and distributed computing and blocking the release of mainframe TCP/IP support. When that was overturned, they change their strategy and claim that since they had corporate strategic responsibility for everything that crossed datacenter walls, it had to be released through them; what shipped got an aggregate of 44kbytes/sec using nearly a whole 3090 processor. I then do the changes for RFC1044 support and, in some tuning tests at Cray Research between a Cray and an IBM 4341, get sustained 4341 channel throughput using only a modest amount of the 4341 processor (something like a 500 times improvement in bytes moved per instruction executed). I had gotten the HSDT project in the early 80s, T1 and faster computer links (both terrestrial and satellite). Note in the 60s, IBM offered the (telecommunication controller) 2701 supporting T1 speeds. However in the mid-70s with the transition to SNA/VTAM, there appeared to be issues that capped SNA controllers at 56kbit/sec links (and with HSDT T1 and faster links, I had all sorts of battles with the communication group).
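A back-of-envelope sketch of that efficiency gap (every figure below except the 44kbytes/sec is an assumption for illustration, not one of the original measurements):

```python
# Rough sketch of "bytes moved per CPU consumed" for the two TCP/IP paths.
# The 4341 channel rate and CPU fractions are assumptions, not measurements.

# base product: ~44 kbytes/sec consuming nearly a whole 3090 processor
base_rate = 44_000            # bytes/sec
base_cpu  = 1.0               # assumed: ~100% of one processor

# RFC1044 path: sustained 4341 channel throughput (assume ~3 mbytes/sec)
# using only a modest amount of the 4341 (assume ~25% of one processor)
rfc1044_rate = 3_000_000
rfc1044_cpu  = 0.25

improvement = (rfc1044_rate / rfc1044_cpu) / (base_rate / base_cpu)
print(f"~{improvement:.0f}x more bytes per processor-second")
# the quoted ~500x is per *instruction*: a 3090 processor also executed
# many more instructions/sec than a 4341, widening the per-instruction gap
```

Even with these guessed figures the per-processor-second ratio lands in the hundreds, the same order of magnitude as the quoted ~500x per-instruction figure.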
Early on I was also working with the NSF director and supposed to get
$20M to interconnect the NSF supercomputer centers; then congress cuts
the budget, some other things happen and eventually an RFP is released
(in part based on what we already had running). From the 28Mar1986
Preliminary Announcement:
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers
Program to provide Supercomputer cycles; the New Technologies Program
to foster new supercomputer software and hardware developments; and
the Networking Program to build a National Supercomputer Access
Network - NSFnet.
... snip ...
IBM internal politics was not allowing us to bid (being blamed for online computer conferencing inside IBM likely contributed; folklore is that 5 of 6 members of the corporate executive committee wanted to fire me). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet.
rfc1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: RFC33 New HOST-HOST Protocol Date: 31 Aug, 2024 Blog: Facebook
Network Working Group Request for Comments: 33
... aka, the IBM Science Center, CP/67 (360/67, virtual machine,
VNET/RSCS) 60s wide-area network (from one of the members that
invented GML in 1969):
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
... however there was a totally different kind of implementation from
the 60s, done for (batch) OS/360 HASP:
https://en.wikipedia.org/wiki/Houston_Automatic_Spooling_Priority
that had "TUCC" in cols 68-71 of the source cards (for the univ. where it originated). Early 70s internal batch OS360/HASP (and follow-on batch JES2) installations wanted to connect into the CP/67 wide-area network. An issue was that the HASP version was completely tied to batch OS/360 ... while VNET/RSCS had a clean layered implementation. As a result, a "clean" VNET/RSCS device driver was done that simulated the HASP network implementation, allowing the batch operating systems to be connected into the growing internal network.
However the HASP (and later JES2) implementations had other issues, including 1) network nodes were defined in spare entries of the 256-entry pseudo device table (maybe 160-180, while the internal network was already past 256 nodes) and traffic would be trashed where the origin or destination node wasn't in the local table, and 2) since network fields were intermixed with job control fields, traffic originating from a HASP/JES2 system at a slightly different release/version from the destination HASP/JES2 could result in crashing the destination host operating system. In the 1st case, HASP/JES2 systems were restricted to isolated boundary/edge nodes. In the 2nd case, a body of VNET/RSCS simulated HASP/JES2 drivers evolved that could recognize that the origin and destination HASP/JES2 systems were at different release/versions and re-organize the header fields to correspond to the specific version/release of a directly connected destination HASP/JES2 system (there was an infamous case where local JES2 mods in San Jose, CA were crashing JES2 systems in Hursley, England, and the intermediate VNET/RSCS system was blamed because its drivers hadn't been updated to handle the local San Jose changes). Another case: the west coast STL (now SVL) lab and Hursley installed a double-hop satellite link (wanting to use each other's systems off-shift) ... it worked fine with VNET/RSCS but wouldn't work at all with JES2 (because its telecommunication layer had a fixed max round-trip delay ... and the double-hop round trip exceeded the limit).
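The first limitation amounts to routing from a fixed, too-small table and discarding anything unknown; a minimal sketch of the behavioral difference (node names, table size, and function shapes are all illustrative, not actual HASP/JES2 or RSCS code):

```python
# Simplified sketch of the JES2 node-table problem (not actual JES2 code).
# Nodes lived in spare entries of the 256-entry pseudo device table; traffic
# whose origin OR destination wasn't locally defined was simply discarded.

JES2_SPARE_ENTRIES = 170   # assumed, "maybe 160-180" per the text

def jes2_handle(local_nodes: set, origin: str, dest: str) -> str:
    if origin not in local_nodes or dest not in local_nodes:
        return "discard"                      # traffic trashed
    return "forward"

def rscs_handle(routes: dict, dest: str) -> str:
    # layered store-and-forward only needs a next hop toward the destination
    return f"forward via {routes[dest]}" if dest in routes else "hold"

# with the internal network already past 256 nodes, a JES2 table can never
# be complete -- hence restricting JES2 systems to boundary/edge nodes
local = {f"NODE{i:03d}" for i in range(JES2_SPARE_ENTRIES)}
print(jes2_handle(local, "NODE001", "NODE300"))   # -> discard
```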
Person responsible for the VNET/RSCS implementation
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.
... snip ...
The VNET/RSCS technology was also used for the corporate sponsored
univ BITNET
https://en.wikipedia.org/wiki/BITNET
SJMerc article about Edson (he passed aug2020) and "IBM'S MISSED
OPPORTUNITY WITH THE INTERNET" (gone behind paywall but lives free at
wayback machine)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
Also from wayback machine, some additional (IBM missed) references
from Ed's website
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm
At the time of the great cutover to internetworking 1/1/1983, ARPANET
had approx. 100 network IMPs and 255 hosts, at a time when the
internal network was rapidly approaching 1000 nodes. Archived post
that includes list of world-wide corporate locations that added one or
more nodes during 1983:
https://www.garlic.com/~lynn/2006k.html#8
I've commented that the requirement for (and availability of) IMPs possibly contributed to ARPANET not growing faster. The IBM internal network had a different kind of limitation (besides having to be a corporate installation): corporate required that all links be encrypted ... and periodically faced gov. resistance (especially where links crossed gov. borders; then later the communication group veto'ing anything that wasn't a true mainframe host).
trivia: as an undergraduate I had been hired fulltime responsible for OS/360 running on the 360/67 (the univ originally got it for TSS/360, which never came to fruition). Then the science center came out to install CP67 (3rd installation after Cambridge itself and MIT Lincoln Labs). I mostly got to play with it during my weekend dedicated time; aka the univ. shut down the datacenter on weekends and I had it all to myself (although 48hrs w/o sleep made Monday classes hard). In the early 80s at IBM, I got the HSDT project, T1 and faster computer links (both terrestrial and satellite). Note IBM marketed the telecommunication controller 2701 in the 60s that supported T1 links. Then in the mid-70s, the company moved to SNA/VTAM ... and possibly because of SNA/VTAM issues, controllers became capped at 56kbit links. As a result, I had various run-ins with the communication products group.
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
HASP/JES2, ASP/JES3, NJE/NJI posts
https://www.garlic.com/~lynn/submain.html#hasp
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
some past posts that reference 60s 2701 and late 80s 3737
https://www.garlic.com/~lynn/2024e.html#91 When Did "Internet" Come Into Common Use
https://www.garlic.com/~lynn/2024b.html#62 Vintage Series/1
https://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2011g.html#75 We list every company in the world that has a mainframe computer
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM CICS Date: 31 Aug, 2024 Blog: Facebook
recent post here, originally posted in the internet group
One of CICS's features was to acquire as much OS/360 resource as feasible/practical at startup and then provide the services within CICS (because OS/360 resource management was extremely heavy-weight and bloated): file opens, storage, tasking, etc. This contributed to not having multiprocessor support; the alternative was running increasing numbers of concurrent CICS instances (turn of the century, I visited a machine room with a banner above the mainframe saying it was running 129 concurrent CICS "instances").
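The pattern being described — pay the heavyweight OS acquisition cost once at startup, then satisfy per-transaction requests from lightweight internal services — can be sketched like this (purely illustrative; nothing here is actual CICS internals):

```python
# Illustrative sketch of the CICS approach (not actual CICS internals):
# acquire OS resources once at startup, subdivide them cheaply per request.

class Region:
    def __init__(self, storage_bytes: int):
        # startup: ONE heavyweight OS-level allocation (think GETMAIN);
        # file opens and task setup would be handled the same way
        self.storage = bytearray(storage_bytes)
        self.next_free = 0

    def getmain(self, n: int) -> memoryview:
        # per-transaction: cheap internal suballocation, no OS call
        if self.next_free + n > len(self.storage):
            raise MemoryError("region storage exhausted")
        view = memoryview(self.storage)[self.next_free:self.next_free + n]
        self.next_free += n
        return view

region = Region(64 * 1024)          # once, at startup
buf = region.getmain(4096)          # per transaction, no OS involvement
```

The same single-region design is also why one CICS couldn't exploit a second processor, and why the workaround was simply running more concurrent instances.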
CICS/BDAM posts
https://www.garlic.com/~lynn/submain.html#cics
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: COBOL history, Article on new mainframe use Newsgroups: comp.arch Date: Sat, 31 Aug 2024 23:38:38 -1000
John Levine <johnl@taugh.com> writes:
web pages gone 404, but live on at the wayback machine
https://web.archive.org/web/20180402200149/http://www.bobbemer.com/HISTORY.HTM
Bemer wiki
https://en.wikipedia.org/wiki/Bob_Bemer
He served on the committee which amalgamated the design for his COMTRAN
language with Grace Hopper's FLOW-MATIC and thus produced the
specifications for COBOL. He also served, with Hugh McGregor Ross and
others, on the separate committee which defined the ASCII character
codeset in 1960, contributing several characters which were not formerly
used by computers including the escape (ESC), backslash (\), and curly
brackets ({}).[3] As a result, he is sometimes known as The Father of
ASCII.[1]
... snip ...
COMTRAN wiki
https://en.wikipedia.org/wiki/COMTRAN
COMTRAN (COMmercial TRANslator) is an early programming language
developed at IBM. It was intended as the business programming equivalent
of the scientific programming language FORTRAN (FORmula TRANslator). It
served as one of the forerunners to the COBOL language. Developed by Bob
Bemer, in 1957, the language was the first to feature the programming
language element known as a picture clause.
... snip ...
COMTRAN manual
https://bitsavers.org/pdf/ibm/7090/F28-8043_CommercialTranslatorGenInfMan_Ju60.pdf
Bob Bemer papers
https://oac.cdlib.org/findaid/ark:/13030/c8j96cf7/
He later played a key role in the development of the COBOL programming
language, which drew on aspects of Bemer's COMTRAN programming language
developed at IBM. Bemer is credited with the first public identification
of the Y2K problem, publishing in 1971 his concern that the standard
representation of the year in calendar dates within computer programs by
the last two digits rather than the full four digits would cause serious
errors in confusing the year 2000 with the year 1900.
... snip ...
posts mentioning Bob Bemer and Cobol:
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2022d.html#24 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022c.html#116 What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2014f.html#78 Over in the Mainframe Experts Network LinkedIn group
https://www.garlic.com/~lynn/2014e.html#52 Rather nice article on COBOL on Vulture Central
https://www.garlic.com/~lynn/2013i.html#49 Internet Mainframe Forums Considered Harmful
https://www.garlic.com/~lynn/2011k.html#6 50th anniversary of BASIC, COBOL?
https://www.garlic.com/~lynn/2011j.html#67 Wondering if I am really eligible for this group
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: RFC33 New HOST-HOST Protocol Date: 01 Sep, 2024 Blog: Facebook
re:
... "job entry" ... original HASP NJE from "TUCC", some HASP RJE, CRJE and other trivia
360/67 terminal controller came installed with 1052 & 2741 port
scanners; the univ had some tty terminals, and so an ascii/tty port scanner
arrived in a Heathkit box to be installed. CP/67 arrived with 1052&2741
terminal support, with automagic switching of the port/line to the
appropriate terminal-type scanner. I then add ascii/tty terminal support
to CP67, integrated with the automagic port scanner switching. I then wanted a
single dial-in number ("hunt" group) for all terminal types ... which
didn't quite work since IBM had taken a short cut and hardwired the port
line-speed. This kicked off a clone controller project at the univ: build a
channel interface card for Interdata/3 programmed to emulate IBM
controller (with the addition of auto line-speed) ... later upgraded
to Interdata/4 for channel interface and cluster of Interdata/3s for
port interfaces. This is sold as a clone controller by Interdata and
later Perkin-Elmer (later, four of us are written up for some part of
the IBM clone controller business)
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division
Later, I modify MVT18 HASP, removing the 2780/RJE support code (to reduce memory requirements) and replacing it with 2741&tty terminal support and an editor (implementing CMS edit syntax) for CRJE (conversational remote job entry). The univ had also gotten a 2250 graphics display with the 360/67. I then modify the CMS editor to interface to the Lincoln Labs 2250 library (for fullscreen display).
part of 68 SHARE presentation, 360/67 768kbytes, MFT14 kernel
82kbytes, HASP 118kbytes (with 1/3rd 2314 track buffering)
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
My 1st OS/360 SYSGEN had been MFT9.5 ... it ran a student Fortran job in well over a minute (the 360/67 had replaced a 709 tape->tape operation where it had run in under a second). I install HASP, which cuts the time in half. I then start redoing the STAGE2 SYSGEN for careful dataset and PDS member placement to optimize arm seek and multi-track search, reducing the time by another 2/3rds to 12.9secs (never got better than the 709 until I install Univ. of Waterloo WATFOR). When CP67 was initially installed, my OS360 benchmark jobstream that ran 322secs on bare hardware ran 856secs in a virtual machine. Over the next few months I rewrite lots of CP67 to reduce the virtual machine overhead running OS360, getting it down to 435secs (reducing CP67 CPU from 534secs to 113secs).
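The benchmark numbers are internally consistent; the CP67 CPU figures are just the difference between virtual-machine and bare-hardware elapsed times:

```python
# The OS/360 jobstream benchmark arithmetic from the text.
bare   = 322   # secs, jobstream on bare hardware
before = 856   # secs, same jobstream under unmodified CP67
after  = 435   # secs, after rewriting CP67 pathlengths

assert before - bare == 534   # initial CP67 CPU (secs)
assert after - bare == 113    # CP67 CPU after the rewrites

print(f"CP67 CPU: {before - bare}s -> {after - bare}s "
      f"({100 * (534 - 113) / 534:.0f}% reduction)")
```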
HASP/JES2, ASP/JES3, NJE/NJI posts
https://www.garlic.com/~lynn/submain.html#hasp
clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: PROFS, SCRIPT, GML, Internal Network Date: 04 Sep, 2024 Blog: Facebook
The PROFS group collected a lot of internal (CMS) command line apps to wrap menus around (for the less computer literate) ... including a very early source version of VMSG for the email client; when the VMSG author tried to offer them a much-enhanced version of VMSG, an attempt was made to get him fired. The whole thing quieted down when the VMSG author showed that every PROFS email contained his initials in a non-displayed field. After that, the VMSG author only shared his source with me and one other person.
CMS SCRIPT was a port/rewrite of MIT CTSS/7094 RUNOFF to CMS. Then
when GML was invented at the science center in 1969, GML tag
processing was added to SCRIPT. From one of the GML inventors about
the science center wide-area network (that morphs into the IBM
internal network)
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
more in recent long-winded (public) Internet&Mainframe posts
https://www.garlic.com/~lynn/2024e.html#95 RFC33 New HOST-HOST Protocol
https://www.garlic.com/~lynn/2024e.html#98 RFC33 New HOST-HOST Protocol
trivia: one of the first mainstream IBM documents done in SCRIPT was the 370 architecture "redbook" (for being distributed in red 3-ring binders). A CMS SCRIPT command line option would generate either the 370 Principles of Operation subset or the full redbook with architecture notes, implementation details, and the various alternatives considered.
ibm science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
SCRIPT, GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
In the early days of 3270 terminals, when they were still part of the annual budget process and required VP-level sign-off (even though we showed a 3270 was about the same cost as the 3yr write-off for a desk phone), we would have Friday after-work get-togethers in the San Jose plant area. One of the hot topics was getting employees and managers to use computers. One of the things we came up with (besides email) was online telephone books. We decided that Jim Gray would spend one week implementing a phone book lookup application (much faster than a person could do the lookup in a paper copy) and I would spend one week implementing procedures that acquired softcopy versions of the IBM internal paper-printed phonebooks and converted them to the online lookup format.
In this period there was also a rapidly spreading rumor that members of the corporate executive committee had started using email for communication ... and we started seeing managers redirecting 3270 deliveries to their desks (even though it was pure facade: the 3270 sitting all day powered on with the login screen or PROFS menu being burned into the screen, with administrative staff actually handling any email).
Email actually dates back to MIT CTSS/7094 days (aka, some of the CTSS people had gone to the 5th flr to do Multics and others went to the IBM science center on the 4th flr to do virtual machines, online apps, the wide-area network morphing into the IBM internal network, invent GML, etc).
CTSS EMAIL history
https://www.multicians.org/thvv/mail-history.html
IBM CP/CMS had electronic mail as early as 1966, and was widely used
within IBM in the 1970s. Eventually this facility evolved into the
PROFS product in the 1980s.
... snip ...
https://www.multicians.org/thvv/anhc-34-1-anec.html
some posts mentioning script, gml, network, vmsg, profs
https://www.garlic.com/~lynn/2023c.html#78 IBM TLA
https://www.garlic.com/~lynn/2018f.html#54 PROFS, email, 3270
https://www.garlic.com/~lynn/2018.html#18 IBM Profs
https://www.garlic.com/~lynn/2015g.html#98 PROFS & GML
https://www.garlic.com/~lynn/2014e.html#48 Before the Internet: The golden age of online service
posts mentioning 3270 deliveries getting redirected to managers' desks
https://www.garlic.com/~lynn/2023f.html#71 Vintage Mainframe PROFS
https://www.garlic.com/~lynn/2023.html#118 Google Tells Some Employees to Share Desks After Pushing Return-to-Office Plan
https://www.garlic.com/~lynn/2022d.html#77 "12 O'clock High" In IBM Management School
https://www.garlic.com/~lynn/2021k.html#79 IBM Fridays
https://www.garlic.com/~lynn/2021h.html#60 PROFS
https://www.garlic.com/~lynn/2017d.html#70 IBM online systems
https://www.garlic.com/~lynn/2017b.html#80 The ICL 2900 Buying a computer in the 1960s
https://www.garlic.com/~lynn/2013b.html#58 Dualcase vs monocase. Was: Article for the boss
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: 360, 370, post-370, multiprocessor Date: 05 Sep, 2024 Blog: Facebook
The original 370s were the 115, 125, 135, 145, 155, and 165. Precursor to the 165 was the 360/85. Then came the Future System effort, which was totally different from 370 and was going to replace it (the claim is that during FS, internal politics was shutting down 370 efforts, and the lack of new 370s is what gave the clone 370 makers their market foothold). Overlapping were some 370 tweaks: 115-II, 125-II, 138, 148, 158, 168. When FS finally implodes there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 in parallel ... some more detail
The 3033 was 168 logic remapped to 20% faster chips (the 303x channel director was a 158 engine with just the integrated channel microcode; a 3031 was two 158 engines, one with just the 370 microcode and the 2nd with just the channel microcode; a 3032 was a 168 tweaked to use the 303x channel director for external channels). As mentioned, the 3081 was some tweaked FS technology ... but an enormous increase in circuits compared to any product of that time (possibly spawning TCMs). The two-processor 3081D aggregate was less than an Amdahl single processor; then the 3081 processor cache was doubled and the 3081K aggregate was about the same as an Amdahl single processor, although MVS two-processor multiprocessor support claimed only 1.2-1.5 times the throughput of a single processor (giving the Amdahl single processor much greater MVS throughput than the 3081K). Also, 308x originally was only going to be multiprocessor, but ACP/TPF didn't have multiprocessor support and there was fear that the market would all move to Amdahl ... which prompts coming out with the 3083 (a 3081 with one of the processors removed).
Once 3033 was out the door, the 3033 processor engineers start on trout/3090.
Account of the end of ACS/360 ... folklore is it was killed because of
concern that it would advance the state-of-the-art too fast and IBM
would lose control of the market. Amdahl then leaves shortly after ... it
also gives an account of features that show up in the 90s with ES/9000
https://people.computing.clemson.edu/~mark/acs_end.html
trivia: with FS imploding there was a project to do a 16-processor 370 machine that I got roped into helping with, and we con'ed the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168 logic to faster chips). Everybody thought it was great until somebody tells the head of POK that it could be decades before POK's favorite son operating system (MVS) had (effective) 16-processor support (bloated multiprocessor support, two processors only getting 1.2-1.5 times a single processor, with overhead increasing as the number of processors goes up; POK doesn't ship a 16-processor machine until after the turn of the century). Then some of us are invited to never visit POK again, and the 3033 processor engineers are instructed: heads down on 3033, no distractions.
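Why 16-way MVS looked decades away can be seen with a crude Amdahl's-law-style model (illustrative only, fitted to the quoted 1.2-1.5x two-processor figure, not to any actual MVS measurement):

```python
# Crude illustrative scaling model (not an MVS measurement): fit an
# Amdahl's-law serial fraction to the quoted 2-way throughput, then
# extrapolate to 16 processors.

def throughput(n: int, two_way: float) -> float:
    """speedup(n) = n / (1 + s*(n-1)), with s chosen so speedup(2) == two_way."""
    s = (2 - two_way) / two_way      # implied serialized fraction
    return n / (1 + s * (n - 1))

for two_way in (1.2, 1.5):
    print(f"2-way = {two_way}x of one processor -> "
          f"16-way = {throughput(16, two_way):.1f}x")
```

Under these assumptions, 16 processors deliver only about 1.5x to 2.7x the throughput of one processor, which is the shape of the problem the head of POK was told about.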
FAA account, The Brawl in IBM 1964
https://www.amazon.com/Brawl-IBM-1964-Joseph-Fox/dp/1456525514
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP, tightly-coupled, multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
some recent posts mentioning end of ACS/360
https://www.garlic.com/~lynn/2024e.html#65 Amdahl
https://www.garlic.com/~lynn/2024e.html#37 Gene Amdahl
https://www.garlic.com/~lynn/2024e.html#18 IBM Downfall and Make-over
https://www.garlic.com/~lynn/2024d.html#113 ... some 3090 and a little 3081
https://www.garlic.com/~lynn/2024d.html#101 Chipsandcheese article on the CDC6600
https://www.garlic.com/~lynn/2024d.html#100 Chipsandcheese article on the CDC6600
https://www.garlic.com/~lynn/2024d.html#66 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024d.html#65 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024c.html#52 backward architecture, The Design of Design
https://www.garlic.com/~lynn/2024c.html#20 IBM Millicode
https://www.garlic.com/~lynn/2024c.html#0 Amdahl and IBM ACS
https://www.garlic.com/~lynn/2024b.html#105 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#103 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#98 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#91 7Apr1964 - 360 Announce
https://www.garlic.com/~lynn/2024.html#116 IBM's Unbundling
https://www.garlic.com/~lynn/2024.html#98 Whether something is RISC or not (Re: PDP-8 theology, not Concertina II Progress)
https://www.garlic.com/~lynn/2024.html#90 IBM, Unix, editors
https://www.garlic.com/~lynn/2024.html#64 IBM 4300s
https://www.garlic.com/~lynn/2024.html#25 1960's COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME Origin and Technology (IRS, NASA)
https://www.garlic.com/~lynn/2024.html#24 Tomasulo at IBM
https://www.garlic.com/~lynn/2024.html#11 How IBM Stumbled onto RISC
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: I learned the language of computer programming in my 50s - here's what I discovered Date: 05 Sep, 2024 Blog: Facebook
I learned the language of computer programming in my 50s - here's what I discovered
I was blamed for online computer conferencing on the corporate internal network (larger than the arpanet/internet from just about the beginning until sometime mid/late 80s) in the late 70s and early 80s. It really took off spring 1981 when I distributed a trip report of a visit to Jim Gray at Tandem ... only about 300 actively participated but claims are that 25,000 were reading; folklore is that when the corporate executive committee was told, 5of6 wanted to fire me.
Among the outcomes was a taskforce to study the phenomena (Hiltz and Turoff, "Network Nation", were hired to participate) and officially sanctioned computer conferencing software and moderated forums.
One of the other outcomes was a researcher was hired to study how I communicated; sat in the back of my office for nine months taking notes on face-to-face and phone conversations, got copies of all my incoming and outgoing email and logs of all my instant messages ... resulted in corporate research reports, conference talks, papers, books and a Stanford PhD (joint between language and computer AI, Winograd was advisor on the computer side). The researcher had been an ESL (English as 2nd language) instructor in a prior life and claimed that I have all the characteristics of a non-English speaker, although I have no other natural language ... hypothesis was I do some sort of abstract thinking that I then have to (try to) translate.
The science center came out and installed CP/67 (3rd installation after CSC itself and MIT Lincoln Labs) ... I spent the 1st few months reducing CP67 CPU overhead running OS/360 in a virtual machine. I then start doing some "zero coding" ... accomplishing things (implicitly) in CP67 as a side-effect of slightly reorganizing other code (in this case not translating to explicit coding but implicit side-effect of code organization). It took me a few years to learn that when other people modified the code, all sorts of "implicit" things might stop working.
HTML trivia: Some of the MIT CTSS/7090 people had gone to 5th flr for
multics, others went to the IBM science center on 4th flr for virtual
machines (CP40 morphs into CP67 when 360/67 becomes available,
precursor to VM370), CSC CP67 wide-area network (morphing into
internal corporate network), various online and performance apps. CTSS
RUNOFF
https://en.wikipedia.org/wiki/TYPSET_and_RUNOFF
was rewritten for CP67 as "SCRIPT". Then GML was invented at science
center in 1969 and GML tag processing added to SCRIPT. After a decade,
GML morphs into ISO standard SGML. Then after another decade, morphs
into HTML at CERN
http://infomesh.net/html/history/early/
https://info.cern.ch/hypertext/WWW/MarkUp/Connolly/MarkUp.html
and the first webserver outside Europe is at the CERN "sister"
location on the SLAC VM370 system:
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML posts
https://www.garlic.com/~lynn/submain.html#sgml
some posts mentioning univ undergraduate, responsible for os/360, cp/67
https://www.garlic.com/~lynn/2024d.html#90 Computer Virtual Memory
https://www.garlic.com/~lynn/2024d.html#0 time-sharing history, Privilege Levels Below User
https://www.garlic.com/~lynn/2024b.html#95 Ferranti Atlas and Virtual Memory
https://www.garlic.com/~lynn/2024b.html#17 IBM 5100
https://www.garlic.com/~lynn/2024.html#94 MVS SRM
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#17 IBM Embraces Virtual Memory -- Finally
https://www.garlic.com/~lynn/2024.html#12 THE RISE OF UNIX. THE SEEDS OF ITS FALL
https://www.garlic.com/~lynn/2023g.html#82 Cloud and Megadatacenter
https://www.garlic.com/~lynn/2023g.html#54 REX, REXX, and DUMPRX
https://www.garlic.com/~lynn/2023g.html#1 Vintage TSS/360
https://www.garlic.com/~lynn/2023f.html#109 CSC, HONE, 23Jun69 Unbundling, Future System
https://www.garlic.com/~lynn/2023f.html#17 Video terminals
https://www.garlic.com/~lynn/2023e.html#29 Copyright Software
https://www.garlic.com/~lynn/2019d.html#68 Facebook Knows More About You Than the CIA
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Rise and Fall IBM/PC Date: 05 Sep, 2024 Blog: FacebookIBM/PC got a big bump with large corporations ordering tens of thousands for 3270 terminal emulation (with a little bit of desktop computing on the side) ... however then as PCs (& workstations) got more powerful, the communication group was fiercely fighting off client/server and distributed computing ... trying to preserve their dumb terminal paradigm ... including severely performance kneecapping PS2 cards. Example was AWD did their own (PC/AT bus) cards for PC/RT ... however for RS/6000 w/microchannel, AWD was told that they couldn't do their own cards but had to use the PS2 microchannel cards. It turned out the PC/RT 4mbit token-ring card had higher card throughput than the PS2 microchannel 16mbit token-ring card ($69 10mbit "cat wiring" Ethernet cards also had much higher throughput than the $800 microchannel 16mbit token-ring card)
Late 80s, a senior disk engineer got a talk scheduled at internal,
world-wide, communication group conference supposedly on 3174
performance, but opened the talk with the statement that the
communication group was going to be responsible for the demise of the
disk division. The disk division was seeing data fleeing datacenters
to more distributed computing friendly platforms with drop in disk
sales. The disk division had come up with a number of solutions but
they were constantly veto'ed by the communication group (with their
corporate strategic responsibility for everything that crossed the
datacenter walls). The communication group stranglehold on datacenters
wasn't just disks and a couple years later, IBM had one of the worst
losses in the history of US companies and was being re-orged into the
13 "baby blues" (take-off on the early 80s breakup of AT&T "baby
bells") in preparation for breaking up the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk asking if we could help with the breakup of the company. Before we get started, the board brings in the former president of AMEX, who (somewhat) reverses the breakup.
communication group stranglehold on datacenters
https://www.garlic.com/~lynn/subnetwork.html#terminal
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
.... some history of PC market
https://arstechnica.com/features/2005/12/total-share/
https://arstechnica.com/features/2005/12/total-share/2/
https://arstechnica.com/features/2005/12/total-share/3/
80-83 w/IBMPC
https://arstechnica.com/features/2005/12/total-share/4/
84-86 (w/IBMPC clones)
https://arstechnica.com/features/2005/12/total-share/5/
87-89 (w/IBMPC clones)
https://arstechnica.com/features/2005/12/total-share/6/
90-93 (w/IBMPC clones)
https://arstechnica.com/features/2005/12/total-share/7/
https://arstechnica.com/features/2005/12/total-share/8/
https://arstechnica.com/features/2005/12/total-share/9/
75-2005
https://arstechnica.com/features/2005/12/total-share/10/
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Rise and Fall IBM/PC Date: 06 Sep, 2024 Blog: Facebookre:
other PC trivia: head of POK (mainframe) went to Boca to head up PS2 and hired Dataquest (since acquired by Gartner) to do a detailed study of the PC business and its future, including a video-taped round-table discussion by silicon valley experts. I had known the person running the Dataquest study for years and was asked to be one of the silicon valley experts (they promised to obfuscate my vitals so Boca wouldn't recognize me as an IBM employee ... and I cleared it with my immediate management). I had also been posting SJMN sunday adverts of quantity-one PC prices to IBM forums for a number of years (trying to show how out of touch with reality Boca forecasts were).
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
some past posts mentioning Dataquest study:
https://www.garlic.com/~lynn/2023g.html#59 IBM Downfall
https://www.garlic.com/~lynn/2023b.html#13 IBM/PC
https://www.garlic.com/~lynn/2022h.html#109 terminals and servers, was How convergent was the general use of binary floating point?
https://www.garlic.com/~lynn/2022h.html#104 IBM 360
https://www.garlic.com/~lynn/2022h.html#38 Christmas 1989
https://www.garlic.com/~lynn/2022f.html#107 IBM Downfall
https://www.garlic.com/~lynn/2021f.html#72 IBM OS/2
https://www.garlic.com/~lynn/2019e.html#27 PC Market
https://www.garlic.com/~lynn/2017h.html#113 IBM PS2
https://www.garlic.com/~lynn/2017f.html#110 IBM downfall
https://www.garlic.com/~lynn/2017d.html#26 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2017b.html#23 IBM "Breakup"
https://www.garlic.com/~lynn/2014l.html#46 Could this be the wrongest prediction of all time?
https://www.garlic.com/~lynn/2008d.html#60 more on (the new 40+ yr old) virtualization
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Rise and Fall IBM/PC Date: 06 Sep, 2024 Blog: Facebookre:
... no results (from what I could tell). Later the person left Boca/IBM as a CEO-for-hire ... 1st for Perot Systems, to take it public. Then a few yrs after leaving IBM, I was doing some work for a financial outsourcing institution and was asked to spend a year in Seattle helping some area companies with electronic commerce ... and he is CEO of a Seattle-area security company (including having a contract with m'soft to implement Kerberos in NT ... which becomes active directory); we had monthly meetings with him.
M'soft was also doing joint work on a new outsourcing service and wanted to platform it on NT. Everybody elected me to explain to the M'soft CEO that it had to be on SUN (not NT). Shortly before my scheduled meeting with the M'soft CEO, some M'soft executives made the strategic decision that the outsourcing service would be slowly ramped up (keeping it to a level that could be handled by NT performance).
Kerberos (public key) posts
https://www.garlic.com/~lynn/subpubkey.html#kerberos
Hadn't spent much time in Seattle since 30yrs earlier, when I was an undergraduate and had been hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computing Services (consolidate all dataprocessing into an independent business unit) ... when I graduated, I joined IBM science center (instead of staying with Boeing CFO)
a few recent posts mentioning Boeing CFO:
https://www.garlic.com/~lynn/2024e.html#67 The IBM Way by Buck Rogers
https://www.garlic.com/~lynn/2024e.html#58 IBM SAA and Somers
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024e.html#13 360 1052-7 Operator's Console
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#79 Other Silicon Valley
https://www.garlic.com/~lynn/2024d.html#76 Some work before IBM
https://www.garlic.com/~lynn/2024d.html#74 Some Email History
https://www.garlic.com/~lynn/2024d.html#63 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024d.html#40 ancient OS history, ARM is sort of channeling the IBM 360
https://www.garlic.com/~lynn/2024d.html#22 Early Computer Use
https://www.garlic.com/~lynn/2024c.html#100 IBM 4300
https://www.garlic.com/~lynn/2024c.html#9 Boeing and the Dark Age of American Manufacturing
https://www.garlic.com/~lynn/2024b.html#111 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#63 Computers and Boyd
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2024.html#77 Boeing's Shift from Engineering Excellence to Profit-Driven Culture: Tracing the Impact of the McDonnell Douglas Merger on the 737 Max Crisis
https://www.garlic.com/~lynn/2024.html#73 UNIX, MULTICS, CTSS, CSC, CP67
https://www.garlic.com/~lynn/2024.html#43 Univ, Boeing Renton and "Spook Base"
https://www.garlic.com/~lynn/2024.html#25 1960's COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME Origin and Technology (IRS, NASA)
https://www.garlic.com/~lynn/2024.html#23 The Greatest Capitalist Who Ever Lived
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 801/RISC Date: 06 Sep, 2024 Blog: FacebookIBM 801/RISC
I would claim that a motivation for 801/RISC was to go in the
opposite direction from the complex Future System effort of the 1st
half of the 70s, which was completely different than 370 and was going
to completely replace 370s (during FS, internal politics was killing
off 370 efforts; claims that the lack of new 370 products during FS is
credited with giving the clone 370 makers their market foothold). When
FS implodes, there is a mad rush to get stuff back into the 370
product pipelines, including kicking off the quick&dirty 3033&3081
efforts.
http://www.jfsowa.com/computer/memo125.htm
Then late 70s, there were several 801/RISC activities to replace a
wide variety of internal CISC microprocessors: controllers, mid-range
370s follow-on to 4331/4341 (4361/4381), follow-on to S38 (as/400).
For a variety of reasons these efforts floundered and things returned
to CISC (and some number of RISC engineers left IBM for other
vendors). I contributed to a white paper that instead of 370 microcode
on a CISC or RISC microprocessor, it was possible to implement nearly
the whole 370 directly in circuits. IBM Boeblingen lab did a 3-chip
"ROMAN" 370, directly in circuits, with the performance of a 370/168.
https://en.wikipedia.org/wiki/IBM_AS/400#Fort_Knox
The ROMP 801/RISC (running PL.8/CP.r) was for OPD to do the follow-on
to the Displaywriter. When that was canceled (in part because the
market was moving to PCs), it was decided to pivot to the UNIX
workstation market, and the company that had done the AT&T Unix port
to the IBM/PC for PC/IX was hired to do one for ROMP ... which becomes
PC/RT and AIX.
https://en.wikipedia.org/wiki/IBM_RT_PC
IBM Palo Alto was working on a BSD unix port to 370, then was
redirected to the PC/RT, which becomes "AOS" (for the PC/RT).
Then work begins on the 6-chip RIOS for POWER and RS/6000. My wife and I get the HA/6000 project, initially for the NYTimes to port their newspaper system (ATEX) from DEC VAXCluster to RS/6000. I rename it HA/CMP when we start doing technical/scientific cluster scale-up with the national labs and commercial cluster scale-up with the RDBMS vendors (Oracle, Sybase, Informix, Ingres).
Then the executive we report to moves over to head up the AIM
https://en.wikipedia.org/wiki/AIM_alliance
Somerset effort to do a single-chip power/pc (including adopting the
Motorola RISC 88K cache and cache consistency for supporting shared
memory, tightly-coupled, multiprocessor)
https://en.wikipedia.org/wiki/PowerPC
https://en.wikipedia.org/wiki/PowerPC_600
https://wiki.preterhuman.net/The_Somerset_Design_Center
Early Jan92, there is HA/CMP meeting with Oracle CEO where AWD/Hester
tells Ellison that we would have 16processor clusters by mid92 and
128processor clusters by ye92. Then by late Jan92, cluster scale-up is
transferred for announce as IBM supercomputer (for
technical/scientific *ONLY*) and we are told we can't work with
anything that has more than four processors (we leave IBM a few months
later).
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
Comparison of high-end mainframe and RIOS/POWER cluster, which
possibly contributed to the performance kneecapping of (commercial)
HA/CMP systems (industry MIPS benchmark: not actual instruction count,
but number of benchmark program iterations compared to reference
platform):
1993: eight processor ES/9000-982 : 408MIPS, 51MIPS/processor
1993: RS6000/990 : 126MIPS; 16-system: 2016MIPS, 128-system: 16,128MIPS
By late 90s, the i86 chip vendors were developing technology with a
hardware layer that translates i86 instructions into RISC micro-ops
for execution, largely negating the performance difference between
801/RISC and i86.
1999 single IBM PowerPC 440 hits 1,000MIPS
1999 single Pentium3 (translation to RISC micro-ops for execution)
hits 2,054MIPS (twice PowerPC 440)
2003 single Pentium4 processor 9.7BIPS (9,700MIPS)
2010 E5-2600 XEON server blade, two chip, 16 processor aggregate
500BIPS (31BIPS/processor)
trivia: some Stanford people had approached IBM Palo Alto Scientific
Center about IBM producing a workstation they had developed. PASC
invites a few IBM labs/centers to a review, all the reviewers claim
what they were doing was much better than the Stanford workstation
(and IBM declines). The Stanford people then form their own company to
produce SUN workstations.
trivia2: folklore is the large scale naval carrier war games after turn of century had (silent) diesel/electric submarines taking out the carrier.
801/RISC, Iliad, ROMP, RIOS, PC/RT, RS/6000, Power, PowerPC posts
https://www.garlic.com/~lynn/subtopic.html#801
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 801/RISC Date: 07 Sep, 2024 Blog: Facebookre:
... well, after FS imploded, I was pulled in to help on a 16-processor, tightly-coupled, shared-memory multiprocessor. Then 1976, there was an adtech conference in POK (the last one for a few years, since so many adtech groups were being pulled into the 370 development breach, trying to get 370 efforts going again after FS imploded) and the 801 group was also presenting. One of the 801 group gave me a bad time, saying he had looked at vm370 source and found no multiprocessor support.
one of my hobbies after joining IBM science center was enhanced production operating systems for internal datacenters (and world-wide, online sales&marketing HONE was a long-time customer). In parallel, Charlie was inventing the compare&swap instruction (CAS chosen because those are Charlie's initials) while working on CP67 multiprocessing fine-grain locking. Had meetings with the 370 architecture "owners" trying to justify CAS; pushback from the POK favorite-son operating system was that 360 test&set was sufficient for SMP operation. Challenge was to come up with other uses (to justify adding it to 370), resulting in use for multithreaded applications (like large DBMS) ... examples later appearing in the 370 PoP.
with the decision to add virtual memory to all 370s, some split off from CSC to do VM370 ... in the morph of CP67->VM370, lots of CP67 features were simplified/dropped, including dropping SMP support. Starting with VM370R2, I started CSC/VM, adding lots of CP67 stuff back into VM370; not initially SMP support, but I did do the kernel reorg needed for SMP. US HONE datacenters were consolidated in Palo Alto (trivia: when facebook 1st moves into silicon valley, it is a new bldg built next door to the former HONE datacenter) with loosely-coupled, shared-DASD operation ... load-balancing and fall-over support ... growing to eight systems. Then for VM370R3 I add SMP, tightly-coupled multiprocessing into CSC/VM, initially for HONE so they could add a 2nd processor to each (loosely-coupled) system (for 16 processors total) ... each system getting twice the throughput of a single-processor system (at a time when the POK favorite-son operating system documented its 2-processor support as getting only 1.2-1.5 times the throughput of a single processor). I then get dragged in to help with a 16-processor, tightly-coupled SMP.
Lots of people thought the 16-processor SMP was really great (including being able to con the 3033 processor engineers into working on it in their spare time, lots more interesting than remapping 168 logic to 20% faster chips) until somebody told the head of POK that it could be decades before POK's operating system had (effective) 16-way support (only getting 1.2-1.5 for 2-way, and overhead increasing as the number of processors increases; POK doesn't ship a 16-way until after the turn of the century). Then the head of POK invites some of us to never visit POK again (and the 3033 processor engineers were directed heads down, no distractions).
other trivia: not long later, I transfer from CSC to SJR on the west coast. Then in the early 80s, being blamed for online computer conferencing and other transgressions, I was transferred from SJR to Yorktown; left to live in San Jose, but had to commute to YKT a couple times a month (work in San Jose mondays, red-eye SFO->kennedy, usually in research by 6am; John liked to go out after work ... there were times that I didn't check into the hotel until 3am weds morning).
801/RISC, Iliad, ROMP, RIOS, PC/RT, RS/6000, Power, PowerPC posts
https://www.garlic.com/~lynn/subtopic.html#801
SMP, tightly-coupled, multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
CSC/VM posts
https://www.garlic.com/~lynn/submisc.html#cscvm
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: How the Media Sanitizes Trump's Insanity Date: 07 Sep, 2024 Blog: FacebookHow the Media Sanitizes Trump's Insanity. The political press's efforts to rationalize Trump's incoherent statements are eroding our shared reality and threatening informed democracy.
more from 4yrs ago:
as his sister says .... he lies, cheats, is cruel, has no principles,
can't be trusted, doesn't read, his major past accomplishments were
five bankruptcies (being bailed out by the Russians) and paying
somebody to take his SATs, implying totally self-centered and doesn't
understand doing anything that doesn't directly benefit him. ... and
his niece ... Trump's base loved that he was a liar and a cheat -- but
now it's coming back to bite them. Rooting for a massive jerk to stick
it to the liberals is super fun -- until he's lying about Americans
dying
https://www.salon.com/2020/08/04/trumps-base-loved-that-he-was-a-liar-and-a-cheat--but-now-its-coming-back-to-bite-them/
Fox Reporter Jennifer Griffin Snaps Back at Trump's Call for Fox to
Fire Her Because She Confirmed His Grotesque and Crude Disdain for
Dead Soldiers: "My Sources Are Unimpeachable"
https://buzzflash.com/articles/fox-reporter-jennifer-griffin-to-trumps-call-for-fox-to-fire-her-because-she-confirmed-his-grotesque-disdain-for-dead-soldiers-my-sources-are-unimpeachable
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Auto C4 Taskforce Date: 08 Sep, 2024 Blog: Facebook1990, I was asked to be one of the IBM reps to the auto industry C4 task force (planning on heavily using technology, so tech company reps were invited).
In the 70s, cheap foreign cars were taking the market; congress set import quotas, giving US car companies huge profits that they were supposed to use to completely remake themselves, but they just pocketed the money and continued business as usual. Foreign car makers found that at the quotas, they could switch to high-end, higher-profit cars (further reducing pressure on US-made car prices) ... and cut the time to turn out a completely new car from 7-8yrs to 3-4yrs.
In 1990, US makers were finally looking at a make-over; US car makers were still taking 7-8yrs to turn out new models while foreign makers were cutting elapsed time in half again (to 18-24months) ... allowing them to adapt much faster to new technologies and changing customer preferences. I would ask the IBM mainframe rep to the task force what they were supposed to contribute, since they had some of the same problems.
Aggravating the situation, the US car makers had spun off much of their parts businesses and were finding that parts for 7-8yr-old designs were no longer available, and they had additional delays redesigning to use currently available parts.
One of the things US auto industry did was run two car design groups in parallel ... offset by 4yrs ... so it looked as if they could come out with something more timely (as opposed to just more frequently)
C4 taskforce posts
https://www.garlic.com/~lynn/submisc.html#auto.c4.taskforce
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Seastar and Iceberg Date: 10 Sep, 2024 Blog: FacebookIBM had tried to do Seastar (but it was going to be late, 1995 slipping to 1997), which was meant to compete with the STK ICEBERG (which IBM eventually logo'ed); from an internal jun1992 forum (just before leaving IBM):
... and archived email from 30Dec1991 about the last (emulated) CKD
DASD being canceled (aka the 3380 was emulated; it can be seen in the
records/track calculation, which required record lengths to be rounded
up to a "fixed" cell size) and all future DASD being fixed-block with
simulated CKD.
https://www.garlic.com/~lynn/2019b.html#email911230
also mentions that we were working on making (national lab) LLNL's filesystem (branded "Unitree") available on HA/CMP (HA/CMP had started out HA/6000, originally for NYTimes to move their newspaper system "ATEX" off VAXCluster to RS/6000, I rename it HA/CMP when I start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors; Oracle, Sybase, Ingres, Informix). Then cluster scale-up is transferred for announce as IBM "supercomputer" (for technical/scientific *ONLY*) and we were told we couldn't work with anything that had more than four processors (we leave IBM a few months later).
trivia, IBM had been selling the IBM S/88 (85-93, logo'ed fault
tolerant box) ... then the S/88 product administrator started taking
us around to their S/88 customers.
https://en.wikipedia.org/wiki/Stratus_Technologies
other STK from long ago and far away ...
Date: 04/23/81 09:57:42
To: wheeler
your ramblings concerning the corp(se?) showed up in my reader
yesterday. like all good net people, i passed them along to 3 other
people. like rabbits interesting things seem to multiply on the
net. many of us here in pok experience the sort of feelings your mail
seems so burdened by: the company, from our point of view, is out of
control. i think the word will reach higher only when the almighty $$$
impact starts to hit. but maybe it never will. its hard to imagine one
stuffed company president saying to another (our) stuffed company
president i think i'll buy from those inovative freaks down the
street. '(i am not defending the mess that surrounds us, just trying
to understand why only some of us seem to see it).
bob tomasulo and dave anderson, the two poeple responsible for the
model 91 and the (incredible but killed) hawk project, just left pok
for the new stc computer company. management reaction: when dave told
them he was thinking of leaving they said 'ok. 'one word. 'ok. ' they
tried to keep bob by telling him he shouldn't go (the reward system in
pok could be a subject of long correspondence). when he left, the
management position was 'he wasn't doing anything anyway. '
in some sense true. but we haven't built an interesting high-speed
machine in 10 years. look at the 85/165/168/3033/trout. all the same
machine with treaks here and there. and the hordes continue to sweep
in with faster and faster machines. true, endicott plans to bring the
low/middle into the current high-end arena, but then where is the
high-end product development?
... snip ... top of post, old email index
FS in 1st half of 70s was completely different and was to completely
replace 370s (internal politics during FS was killing off 370 efforts,
the lack of new 370s during FS is credited with giving the clone 370
system makers, including Amdahl, their market foothold). When FS
implodes, there was a mad rush to get stuff back into the 370 product
pipelines, including kicking off quick&dirty 3033&3081 in
parallel (3033 started out 168 logic remapped to 20% faster
chips). Once the 3033 was out the door, the processor engineers start
on trout/3090.
http://www.jfsowa.com/computer/memo125.htm
back to the 60s, Amdahl wins the case that ACS should be 360
compatible; end of ACS/360 (folklore is that it was canceled because
it was felt the state-of-the-art would be advanced too fast and IBM
would lose control of the market) ... Amdahl leaves IBM shortly
later. Following includes ACS/360 features that show up more than
20yrs later with ES/9000.
posts mentioning getting to play disk engineer in bldg14&15 (disk
engineering and product test)
https://www.garlic.com/~lynn/subtopic.html#disk
CKD, FBA, multi-track search, etc posts
https://www.garlic.com/~lynn/submain.html#dasd
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Edson and Bullying Date: 11 Sep, 2024 Blog: FacebookA co-worker at the IBM cambridge science center through the 70s (we both transferred to san jose research in 1977; he passed aug2020) was responsible for the internal network, which was larger than arpanet/internet from just about the beginning until sometime mid/late 80s ... book about his being bullied as a child: the brutal US culture of bullying, stamping out creativity and enforcing conformity, "It's Cool to Be Clever: The Story of Edson C. Hendricks, the Genius Who Invented the Design for the Internet"
Edson
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.
... snip ...
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
posts mentioning Edson and bullying
https://www.garlic.com/~lynn/2023c.html#27 What Does School Teach Children?
https://www.garlic.com/~lynn/2023b.html#72 Schooling Was for the Industrial Era, Unschooling Is for the Future
https://www.garlic.com/~lynn/2022g.html#13 The Nazification of American Education
https://www.garlic.com/~lynn/2022f.html#67 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022d.html#16 My Story: How I Was "Groomed" by My Elementary School Teachers
https://www.garlic.com/~lynn/2022c.html#82 We Have a Creativity Problem
https://www.garlic.com/~lynn/2021k.html#100 What Industrial Societies Get Wrong About Childhood
https://www.garlic.com/~lynn/2021f.html#99 IQ tests can't measure it, but 'cognitive flexibility' is key to learning and creativity
https://www.garlic.com/~lynn/2021c.html#78 Air Force opens first Montessori Officer Training School
https://www.garlic.com/~lynn/2021c.html#47 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2021b.html#103 IBM Innovation
https://www.garlic.com/~lynn/2021b.html#87 IBM Innovation
https://www.garlic.com/~lynn/2021b.html#86 Dail-up banking and the Internet
https://www.garlic.com/~lynn/2021b.html#54 In the 1970s, Email Was Special
https://www.garlic.com/~lynn/2021.html#11 IBM PCjr
https://www.garlic.com/~lynn/2020.html#0 The modern education system was designed to teach future factory workers to be "punctual, docile, and sober"
https://www.garlic.com/~lynn/2019e.html#3 The One Type of Game That Kills Creativity and Innovation
https://www.garlic.com/~lynn/2019c.html#67 Range
https://www.garlic.com/~lynn/2018d.html#111 The story of the internet is all about layers; How the internet lost its decentralized innocence
https://www.garlic.com/~lynn/2018d.html#106 Everyone is born creative, but it is educated out of us at school
https://www.garlic.com/~lynn/2018c.html#73 Army researchers find the best cyber teams are antisocial cyber teams
https://www.garlic.com/~lynn/2018b.html#46 Think you know web browsers? Take this quiz and prove it
https://www.garlic.com/~lynn/2018b.html#45 More Guns Do Not Stop More Crimes, Evidence Shows
https://www.garlic.com/~lynn/2017i.html#38 Bullying trivia
https://www.garlic.com/~lynn/2017h.html#84 Bureaucracy
https://www.garlic.com/~lynn/2017e.html#20 cultural stereotypes, was Ironic old "fortune"
https://www.garlic.com/~lynn/2017c.html#43 Formed by Megafloods, This Place Fooled Scientists for Decades
https://www.garlic.com/~lynn/2016f.html#13 Bullying
https://www.garlic.com/~lynn/2016e.html#53 E.R. Burroughs
https://www.garlic.com/~lynn/2016d.html#8 What Does School Really Teach Children
https://www.garlic.com/~lynn/2015g.html#99 PROFS & GML
https://www.garlic.com/~lynn/2015g.html#80 Term "Open Systems" (as Sometimes Currently Used) is Dead -- Who's with Me?
https://www.garlic.com/~lynn/2015c.html#98 VNET 1983 IBM
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Viewgraph, Transparency, Viewfoil Date: 11 Sep, 2024 Blog: Facebook
from IBM Jargon:
Transparency (projection)
https://en.wikipedia.org/wiki/Transparency_(projection)
Overhead projector
https://en.wikipedia.org/wiki/Overhead_projector
The use of transparent sheets for overhead projection, called
viewfoils or viewgraphs, was largely developed in the United
States. Overhead projectors were introduced into U.S. military
training during World War II as early as 1940 and were quickly being
taken up by tertiary educators,[14] and within the decade they were
being used in corporations.[15] After the war they were used at
schools like the U.S. Military Academy.[13] The journal Higher
Education of April 1952 noted;
Nick Donofrio stopped by and my wife showed him five hand-drawn charts
for the project, and he approves it; originally HA/6000 for NYTimes to
move their newspaper system (ATEX) off VAXcluster to RS/6000. I rename
it HA/CMP when I started doing technical/scientific cluster scale-up
with national labs and commercial cluster scale-up with RDBMS vendors
(Oracle, Sybase, Informix, Ingres) ... 16-system clusters by mid92 and
128-system clusters by ye92 ... the mainframe side was complaining it
would be far ahead of them (end-Jan92, it gets transferred for announce
as IBM supercomputer, for technical/scientific *ONLY*, and we were told
we couldn't work on anything with more than four processors).
My wife and I did a long EU marketing trip, each of us giving 3-4 HA/CMP marketing presentations a day before moving on to the next European city ... finally ending in Cannes, practicing our Oracle World marketing presentations.
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Viewgraph, Transparency, Viewfoil Date: 11 Sep, 2024 Blog: Facebook
re:
... trivia history: some of the MIT CTSS/7094 people went to the 5th flr to do MULTICS and others went to science center on 4th flr for virtual machines, internal network, lots of performance and online apps. CTSS RUNOFF was rewritten for CP67/CMS as SCRIPT, then in 1969, GML (for three inventor names) was invented at science center and GML tag processing added to SCRIPT
last half 70s, some of us transfer from science center out to San Jose.
early ("GML") foils were printed on 6670 or 3800 laser printers in "large" font on paper ... and then run through a transparency copier. From long ago and far away:
:frontm.
:titlep.
:title.GML for Foils
:date.August 24, 1984
:author.xxx1
:author.xxx2
:author.xxx3
:author.xxx4
:address.
:aline.T.J. Watson Research Center
:aline.P.O. Box 218
:aline.Yorktown Heights, New York
:aline.&rbl.
:aline.San Jose Research Lab
:aline.5600 Cottle Road
:aline.San Jose, California
:eaddress.
:etitlep.
:logo.
:preface.
:p.This manual describes a method of producing foils automatically using DCF Release 3 or SCRIPT3I. The foil package will run with the following GML implementations:
:ul.
:li.ISIL 3.0
:li.GML Starter Set, Release 3
:eul.
:note.This package is an :q.export:eq. version of the foil support available at Yorktown and San Jose Research as part of our floor GML. Yorktown users should contact xxx4 for local documentation. Documentation for San Jose users is available in the document stockroom.
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: 370/125, Future System, 370-138/148 Date: 11 Sep, 2024 Blog: Facebook
Early 70s, I was asked to help get VM370 up and running on a 256kbyte 370/125 for a European shipping company with offices in downtown NYC. I had done a lot of rewriting CP67 as an undergraduate in the 60s, including doing things to reduce fixed real storage requirements for a 256kbyte 360/67. The first 125 problem was that it wouldn't boot; the problem was that the 115/125 implemented CLCL/MVCL wrong. All 360 instructions pre-checked starting and ending parameter addresses for validity; 370 CLCL/MVCL were supposed to execute incrementally w/o prechecking the ending address ... checking addresses only as they were incrementally reached. VM370 cleared storage and determined the size of memory with an MVCL of length 16mbytes (when it got to the end of real storage it got a program check and could determine the failing address). The 115/125 used 360 semantics and pre-checked for 16mbytes, immediately program checking (implying that there wasn't any memory). I then did a lot of bloat cutting to minimize fixed kernel size ... maximizing the amount of the 256kbytes available for paging.
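The 360-vs-370 MVCL semantics difference can be sketched with a toy model (illustrative only; the byte-at-a-time loop and the function names are my own simplification, not how the microcode actually worked):

```python
# Toy model of VM370's memory-sizing trick: clear storage with a single
# MVCL of length 16MB and use the program-check failing address to learn
# the real memory size.

MEMORY_SIZE = 256 * 1024          # e.g. a 256kbyte 370/125
REQUEST_LEN = 16 * 1024 * 1024    # VM370 asked for the full 16mbytes

def mvcl_370(start, length, memory_size):
    """Correct 370 semantics: execute incrementally, program-check only
    when the running address actually leaves real storage."""
    addr = start
    while addr < start + length:
        if addr >= memory_size:
            return ('program_check', addr)   # failing address => memory size
        addr += 1                            # byte-at-a-time, for clarity
    return ('complete', addr)

def mvcl_360_precheck(start, length, memory_size):
    """The 115/125 bug: 360-style precheck of the ending address before
    moving anything, so the probe fails without learning anything."""
    if start + length > memory_size:
        return ('program_check', None)       # no failing address learned
    return ('complete', start + length)

print(mvcl_370(0, REQUEST_LEN, MEMORY_SIZE))          # finds the 256KB boundary
print(mvcl_360_precheck(0, REQUEST_LEN, MEMORY_SIZE)) # fails before moving a byte
```

With correct 370 semantics the program check hands back the first invalid address (the memory size); with the 360-style precheck the probe looks like there is no memory at all.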
Then there was Future System effort, completely different and was to
totally replace 370 (claim that the lack of new 370 during FS
contributed to giving clone 370 system makers their market foothold).
http://www.jfsowa.com/computer/memo125.htm
When FS implodes ... both the 125-II and 138/148 groups ask me to
help. For Endicott, they wanted the highest-executed 6kbytes of VM370
kernel instruction paths identified for redoing in microcode ... aka
"ECPS" (running 10 times faster) ... old archived post with the initial
analysis (6kbytes accounted for 79.55% of kernel execution,
re-implemented in m'code running 10 times faster)
https://www.garlic.com/~lynn/94.html#21
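The selection method behind that analysis (rank kernel paths by measured execution time, take the hottest until the ~6kbyte microcode budget is full) can be sketched as follows; the path names and numbers here are invented for illustration, not from the linked post:

```python
# Hedged sketch of the ECPS selection: sort kernel code paths by share of
# measured kernel execution time, then take the hottest paths until the
# ~6KB microcode budget is exhausted.
def pick_for_microcode(paths, budget_bytes=6 * 1024):
    """paths: list of (name, size_bytes, pct_of_kernel_time)."""
    chosen, used, covered = [], 0, 0.0
    for name, size, pct in sorted(paths, key=lambda p: p[2], reverse=True):
        if used + size <= budget_bytes:   # greedily fill the budget
            chosen.append(name)
            used += size
            covered += pct
    return chosen, used, covered

# Hypothetical profile data (names and figures made up for the example).
sample = [("DISPATCH", 2048, 30.0), ("FREE/FRET", 1536, 25.0),
          ("PAGEIO", 2048, 15.0), ("UNTRANS", 1024, 9.55),
          ("CONSOLE", 4096, 5.0)]
names, used, covered = pick_for_microcode(sample)
print(names, used, f"{covered:.1f}% of kernel time")
```

The point of the real analysis was how steep the curve is: a small number of very hot paths account for the overwhelming majority of kernel execution, so 6kbytes of microcode could cover nearly 80% of it.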
For the 125-II they wanted me to do 5-processor VM370 support. Both the 115 & 125 had a nine-position memory bus for microprocessors. All the 115 microprocessors were the same, running at 800kips (for the dedicated integrated control units as well as the 370 microcode, with 370 instructions running at 80kips). The 125 was the same except the microprocessor that ran the 370 microcode was 1.2mips (running 370 at 120kips). They wanted me to do VM370 to run on a 125 where five of the microprocessors ran the 370 microcode (600kips aggregate, plus most of the ECPS being done for the 138/148). Endicott objected because a 600kips 125 would overlap the 138&148 ... and got my 125 effort shutdown (in the escalation meeting I had to sit on both sides of the table and argue both cases).
Then Endicott wanted to ship every 138/148 with ECPS & VM370 pre-installed (sort of like LPAR PR/SM) ... but POK objected; POK was also in the process of convincing corporate to kill the VM370 product, shutdown the development group and transfer all the people to POK for MVS/XA (Endicott eventually manages to save the VM370 product mission, but had to recreate a development group from scratch, and couldn't reverse the corporate decision against shipping every 138/148 with ECPS & VM370 pre-installed). Endicott then talks me into running around the world presenting the ECPS case to planners/forecasters.
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
360 &/or 370 microcode posts
https://www.garlic.com/~lynn/submain.html#360mcode
SMP, tightly-coupled, shared-memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
125 5-processor SMP posts
https://www.garlic.com/~lynn/submain.html#bounce
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Seastar and Iceberg Date: 12 Sep, 2024 Blog: Facebook
re:
Note: 1972, CEO Learson tried (and failed) to block bureaucrats,
careerists, and MBAs from destroying the Watson culture/legacy; the
destruction was greatly accelerated by the Future System effort.
Ferguson & Morris, "Computer Wars: The Post-IBM World", Time Books
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394
... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and
*MAKE NO WAVES* under Opel and Akers. It's claimed that
thereafter, IBM lived in the shadow of defeat ... But because of the
heavy investment of face by the top management, F/S took years to
kill, although its wrong headedness was obvious from the very
outset. "For the first time, during F/S, outspoken criticism became
politically dangerous," recalls a former top executive
... snip ...
... 20yrs later, IBM has one of the worst losses in the history of US
companies and was being reorganized into the 13 "baby blues" (take off
on AT&T "baby bells" breakup a decade earlier), in preparation for the
IBM breakup
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup of the company. Before we get
started, the board brings in the former president of AMEX that
(somewhat) reverses the breakup.
more:
http://www.jfsowa.com/computer/memo125.htm
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: what's a mainframe, was is Vax addressing sane today Newsgroups: comp.arch Date: Fri, 13 Sep 2024 08:45:46 -1000
John Levine <johnl@taugh.com> writes:
They also claimed that the main difference between the 360/195 and 370/195 was the introduction of ("370") hardware retry (masking all sorts of transient hardware errors). I have some vague recall of mention that the 360/195 mean time between hardware checks was three hrs (a combination of the number of circuits and how fast they were running).
Then the decision was made to add virtual memory to all 370s, and it was decided that the difficulty of adding virtual memory to the 370/195 wasn't justified ... and all new work on the machine was dropped.
Account of end of ACS/360 ... Amdahl had won the battle to make ACS,
360 compatible ... but folklore is that executives were then afraid it
would advance the state-of-the-art too fast and IBM would lose control
of the market ... includes some references to multithreading
patents&disclosures.
https://people.computing.clemson.edu/~mark/acs_end.html
also mentions that some ACS/360 features show up more than 20yrs later with ES/9000.
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
some past posts mentioning end of ACS/360, 370/195, multithread, add
virtual memory to all 370s
https://www.garlic.com/~lynn/2024d.html#101 Chipsandcheese article on the CDC6600
https://www.garlic.com/~lynn/2024d.html#66 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024c.html#20 IBM Millicode
https://www.garlic.com/~lynn/2024.html#24 Tomasulo at IBM
https://www.garlic.com/~lynn/2023e.html#100 CP/67, VM/370, VM/SP, VM/XA
https://www.garlic.com/~lynn/2023d.html#32 IBM 370/195
https://www.garlic.com/~lynn/2023d.html#20 IBM 360/195
https://www.garlic.com/~lynn/2022d.html#34 Retrotechtacular: The IBM System/360 Remembered
https://www.garlic.com/~lynn/2022b.html#51 IBM History
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: what's a mainframe, was is Vax addressing sane today Newsgroups: comp.arch Date: Fri, 13 Sep 2024 09:05:33 -1000
John Levine <johnl@taugh.com> writes:
1st part of the 70s, IBM had the FS effort, which was totally different
from 370 and was to completely replace 370 (internal politics was
killing off 370 efforts).
http://www.jfsowa.com/computer/memo125.htm
when FS finally implodes, there is a mad rush to get stuff back into the 370 product pipelines, including kicking off 3033&3081 efforts in parallel.
I got sucked into working on a 16-processor 370 SMP and we con the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168 logic for 20% faster chips). Everybody thought it was great until somebody tells the head of the POK lab that it could be decades before the POK favorite son operating system (batch MVS) has (effective) 16-processor support (at the time MVS documentation claimed that its 2-processor throughput was 1.2-1.5 times the throughput of a single processor). The head of POK then invites some of us to never visit POK again and directs the 3033 processor engineers, heads down and no distractions (although I was invited to sneak back into POK to work with them). POK doesn't ship a 16-processor machine until after the turn of the century, more than two decades later.
Once the 3033 was out the door, the processor engineers start on trout/3090. When vector was announced they complained about it being purely a marketing stunt ... they had so speeded up the 3090 scalar that it ran at memory bus saturation (and vector was unlikely to improve throughput much).
I had also started pontificating that relative disk system throughput had gotten an order of magnitude slower (disks got 3-5 times faster while systems got 40-50 times faster) since 360 announce. A disk division executive took exception and directed the division performance group to refute the claims; after a couple weeks they came back and said I had slightly understated the problem. They respun the analysis into how to configure disks to improve system throughput for a user group presentation (16Aug1984, SHARE 63, B874).
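The arithmetic behind the order-of-magnitude claim is simple enough to sketch (using the figures quoted above):

```python
# Relative-throughput arithmetic for the disk claim: since 360 announce,
# systems got ~40-50x faster while disks got only ~3-5x faster, so disk
# throughput *relative* to the system dropped roughly tenfold.
cpu_speedup = 50      # system throughput growth since 360 announce
disk_speedup = 5      # disk throughput growth over the same period
relative = disk_speedup / cpu_speedup   # disk speed per unit of system speed
print(f"disks are ~{1/relative:.0f}x slower relative to the system")
```

The same ratio holds at the low ends of both ranges (40/4 = 10), which is why the performance group could only confirm, not refute, the claim.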
I was doing some work with the disk engineers and found they had been directed to use a very slow processor for the 3880 disk controller, follow-on to the 3830 ... while it handled 3mbyte/sec 3380 disks, it otherwise seriously drove up channel busy. The 3090 originally assumed that the 3880 would be like the previous 3830 but with 3mbyte/sec transfer ... when they found out how bad things actually were, they realized they would have to seriously increase the number of (3mbyte/sec) channels (to achieve target throughput). Marketing then respins the significant increase in channels as it being a wonderful I/O machine. Trivia: the increase in channels required an extra TCM, and the 3090 group semi-facetiously claimed they would bill the 3880 group for the increase in 3090 manufacturing cost.
I was also doing some work with Clementi
https://en.wikipedia.org/wiki/Enrico_Clementi
E&S lab in IBM Kingston ... had boatload of Floating Point Systems boxes
https://en.wikipedia.org/wiki/Floating_Point_Systems
that had 40mbyte/sec disk arrays for keeping the FPS boxes fed.
In 1980, I had been con'ed into doing a channel-extender implementation for IBM STL (since renamed SVL); they were moving 300 people and their 3270 terminals to an offsite bldg with dataprocessing back to the STL datacenter. They had tried "remote 3270" but found the human factors unacceptable. Channel-extender allowed "channel-attached" 3270 controllers to be placed at the offsite bldg with no human factors difference between offsite and inside STL. A side-effect was that it increased system throughput by 10-15%. They had previously spread the 3270 controllers across the same channels as the disks; the channel-extender work significantly reduced 3270 terminal I/O channel busy, increasing disk I/O and system throughput (they considered moving all 3270 controllers to channel-extender, even those physically inside STL). Then there was an attempt to get my support released to customers, but a group in POK playing with some serial stuff got it vetoed; they were afraid that if it was in the market, it would make it harder to release their stuff.
In 1988, the IBM branch office asks if I could help LLNL get some serial stuff they were playing with standardized ... which quickly becomes the fibre-channel standard ("FCS", initially 1gbit/sec, full-duplex, aggregate 200mbyte/sec). The POK serial stuff finally gets released in the 90s with ES/9000 as ESCON (when it is already obsolete, 17mbytes/sec). Then some POK engineers become involved with FCS and define a heavy-weight protocol that significantly reduces throughput, eventually released as FICON. The latest public benchmark I've found is z196 "Peak I/O", getting 2M IOPS using 104 FICON. About the same time, a FCS was announced for E5-2600 blades claiming over a million IOPS (two such FCS have higher throughput than 104 FICON). Also, IBM docs recommended SAPs (system assist processors that do actual I/O) be kept to 70% CPU (more like 1.5M IOPS); also no IBM CKD DASD have been made for decades, all being simulated on industry-standard fixed-block disks.
Future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
getting to play disk engineer in bldgs14&15
https://www.garlic.com/~lynn/subtopic.html#disk
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FCS &/or FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
some recent posts mentioning 16-processor 370 effort
https://www.garlic.com/~lynn/2024e.html#106 IBM 801/RISC
https://www.garlic.com/~lynn/2024e.html#100 360, 370, post-370, multiprocessor
https://www.garlic.com/~lynn/2024e.html#83 Scheduler
https://www.garlic.com/~lynn/2024e.html#65 Amdahl
https://www.garlic.com/~lynn/2024e.html#18 IBM Downfall and Make-over
https://www.garlic.com/~lynn/2024d.html#113 ... some 3090 and a little 3081
https://www.garlic.com/~lynn/2024d.html#100 Chipsandcheese article on the CDC6600
https://www.garlic.com/~lynn/2024d.html#62 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024d.html#10 Benchmarking and Testing
https://www.garlic.com/~lynn/2024c.html#120 Disconnect Between Coursework And Real-World Computers
https://www.garlic.com/~lynn/2024c.html#119 Financial/ATM Processing
https://www.garlic.com/~lynn/2024c.html#100 IBM 4300
https://www.garlic.com/~lynn/2024c.html#51 third system syndrome, interactive use, The Design of Design
https://www.garlic.com/~lynn/2024c.html#15 360&370 Unix (and other history)
https://www.garlic.com/~lynn/2024c.html#11 370 Multiprocessor
https://www.garlic.com/~lynn/2024b.html#61 Vintage MVS
https://www.garlic.com/~lynn/2024.html#36 RS/6000 Mainframe
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: what's a mainframe, was is Vax addressing sane today Newsgroups: comp.arch Date: Fri, 13 Sep 2024 09:54:45 -1000
Terje Mathisen <terje.mathisen@tmsw.no> writes:
late 80s, get HA/6000 project, originally for NYTimes to move their
newspaper system (ATEX) off VAXCluster to RS/6000. I then rename it
HA/CMP when I start doing technical/scientific scale-up with national
labs and commercial scale-up with RDBMS vendors (Oracle, Sybase,
Informix, Ingres) that had VAXCluster support in same source base with
Unix (I do distributed lock manager that supported VAXCluster semantics
to ease ports).
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
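A minimal sketch of what "VAXCluster semantics" means for a distributed lock manager: the six lock modes and their compatibility matrix. Everything else (resource trees, grant/convert queues, node failover) is omitted, and the class and method names are hypothetical, not HA/CMP's actual API:

```python
# VAXCluster-style lock modes: NL (null), CR (concurrent read),
# CW (concurrent write), PR (protected read), PW (protected write),
# EX (exclusive). COMPAT[a] is the set of already-held modes that a new
# request in mode a can coexist with.
COMPAT = {
    "NL": {"NL", "CR", "CW", "PR", "PW", "EX"},
    "CR": {"NL", "CR", "CW", "PR", "PW"},
    "CW": {"NL", "CR", "CW"},
    "PR": {"NL", "CR", "PR"},
    "PW": {"NL", "CR"},
    "EX": {"NL"},
}

class LockManager:
    def __init__(self):
        self.held = {}   # resource -> list of (owner, mode)

    def request(self, resource, owner, mode):
        granted = self.held.setdefault(resource, [])
        if all(m in COMPAT[mode] for _, m in granted):
            granted.append((owner, mode))
            return True
        return False     # a real DLM would queue the request and grant later

    def release(self, resource, owner):
        self.held[resource] = [(o, m) for o, m in self.held[resource]
                               if o != owner]

lm = LockManager()
assert lm.request("file1", "nodeA", "PR")      # protected read granted
assert lm.request("file1", "nodeB", "PR")      # shared readers are compatible
assert not lm.request("file1", "nodeC", "EX")  # exclusive writer must wait
```

An RDBMS that already coded to these modes for its VAXCluster port could keep the same locking logic on the Unix cluster, which is what eased the ports.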
IBM had been marketing S/88, the rebranded fault-tolerant Stratus. Then the S/88
product administrator starts taking us around to their customers.
https://en.wikipedia.org/wiki/Stratus_Technologies
He also has me write a section for the corporate continuous availability
strategy document ... however, it gets pulled when both Rochester
(AS/400, I-systems) and POK (mainframe) complain that they couldn't meet
the requirements.
Early Jan92, in a meeting with the Oracle CEO, AWD/Hester tells Ellison that we would have 16-processor clusters by mid92 and 128-processor clusters by ye92. Within a couple weeks (end Jan92), cluster scale-up is transferred for announce as IBM Supercomputer (scientific/technical *ONLY*) and we are told we can't work on anything with more than four processors (we leave IBM a few months later).
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic
survivability posts
https://www.garlic.com/~lynn/submain.html#available
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: big, fast, etc, was is Vax addressing sane today Newsgroups: comp.arch Date: Fri, 13 Sep 2024 10:38:40 -1000
John Levine <johnl@taugh.com> writes:
After leaving IBM, I was brought into the largest airline res system to look at ten impossible things they couldn't do. Got started with "ROUTES" (about 25% of the mainframe workload); they gave me a full softcopy of the OAG (all scheduled commercial flt segments in the world) ... a couple weeks later I came back with a ROUTES that implemented their impossible things. The mainframe version carried tech trade-offs from the 60s; starting from scratch I could make totally different trade-offs. It initially ran 100 times faster, and after implementing the impossible stuff still ran ten times faster (than their mainframe systems). Showed that ten rs6000/990s could handle the ROUTES workload for every flt of every airline in the world.
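A hypothetical sketch of the core of a routes computation: treat the OAG flight segments as a directed graph and find the fewest-segment connection by breadth-first search (the segment data here is invented for illustration; the real ROUTES did far more than this):

```python
# Minimal route finder over OAG-style flight segments.
from collections import defaultdict, deque

def build_graph(segments):
    """segments: iterable of (origin, destination) airport pairs."""
    g = defaultdict(list)
    for org, dst in segments:
        g[org].append(dst)
    return g

def find_route(graph, origin, destination):
    """Shortest connection (fewest segments) from origin to destination."""
    seen, queue = {origin}, deque([[origin]])
    while queue:
        path = queue.popleft()          # BFS: fewest-hop paths come out first
        if path[-1] == destination:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Invented sample segments.
oag = [("SFO", "ORD"), ("ORD", "LHR"), ("SFO", "JFK"), ("JFK", "LHR")]
print(find_route(build_graph(oag), "SFO", "LHR"))
```

The point of the 60s-vs-from-scratch trade-off is that with the whole OAG in memory, searches like this are cheap, where the original system had to organize everything around scarce memory and disk arms.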
Part of the issue was that they extensively massaged the data on a mainframe MVS/IMS system and then, on Sunday night, rebuilt the mainframe "TPF" (limited data-management services) system from the MVS/IMS system. That was all eliminated.
Fare search was harder because it started being "tuned" by some real time factors.
Could have moved it all to RS/6000 HA/CMP. Then some very non-technical issues kicked in (like the large staff involved in the data massaging). Trivia: I had done a bunch of sleight of hand for HA/CMP RDBMS distributed lock manager scale-up for 128-system clusters.
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
some posts mentioning airline res system, routes, oag
https://www.garlic.com/~lynn/2024e.html#92 IBM TPF
https://www.garlic.com/~lynn/2024b.html#98 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#93 PC370
https://www.garlic.com/~lynn/2024.html#122 Assembler language and code optimization
https://www.garlic.com/~lynn/2023g.html#90 Has anybody worked on SABRE for American Airlines
https://www.garlic.com/~lynn/2023g.html#74 MVS/TSO and VM370/CMS Interactive Response
https://www.garlic.com/~lynn/2023g.html#41 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#13 Vintage Future System
https://www.garlic.com/~lynn/2023d.html#80 Airline Reservation System
https://www.garlic.com/~lynn/2023.html#96 Mainframe Assembler
https://www.garlic.com/~lynn/2022h.html#58 Model Mainframe
https://www.garlic.com/~lynn/2022c.html#76 lock me up, was IBM Mainframe market
https://www.garlic.com/~lynn/2022b.html#18 Channel I/O
https://www.garlic.com/~lynn/2021k.html#37 Why Sabre is betting against multi-cloud
https://www.garlic.com/~lynn/2021i.html#77 IBM ACP/TPF
https://www.garlic.com/~lynn/2021i.html#76 IBM ITPS
https://www.garlic.com/~lynn/2021f.html#8 Air Traffic System
https://www.garlic.com/~lynn/2021b.html#6 Airline Reservation System
https://www.garlic.com/~lynn/2021.html#71 Airline Reservation System
https://www.garlic.com/~lynn/2017k.html#63 SABRE after the 7090
https://www.garlic.com/~lynn/2016f.html#109 Airlines Reservation Systems
https://www.garlic.com/~lynn/2016.html#58 Man Versus System
https://www.garlic.com/~lynn/2015f.html#5 Can you have a robust IT system that needs experts to run it?
https://www.garlic.com/~lynn/2015d.html#84 ACP/TPF
https://www.garlic.com/~lynn/2013g.html#87 Old data storage or data base
https://www.garlic.com/~lynn/2011c.html#42 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2010n.html#81 Hashing for DISTINCT or GROUP BY in SQL
https://www.garlic.com/~lynn/2009o.html#42 Outsourcing your Computer Center to IBM ?
https://www.garlic.com/~lynn/2008h.html#61 Up, Up, ... and Gone?
https://www.garlic.com/~lynn/2007g.html#22 Bidirectional Binary Self-Joins
some posts mentioning distributed lock manager
https://www.garlic.com/~lynn/2024.html#82 Benchmarks
https://www.garlic.com/~lynn/2024.html#71 IBM AIX
https://www.garlic.com/~lynn/2023e.html#86 Relational RDBMS
https://www.garlic.com/~lynn/2023e.html#79 microcomputers, minicomputers, mainframes, supercomputers
https://www.garlic.com/~lynn/2017b.html#82 The ICL 2900
https://www.garlic.com/~lynn/2014.html#73 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013o.html#44 the suckage of MS-DOS, was Re: 'Free Unix!
https://www.garlic.com/~lynn/2013n.html#19 z/OS is antique WAS: Aging Sysprogs = Aging Farmers
https://www.garlic.com/~lynn/2013m.html#87 'Free Unix!': The world-changing proclamation made 30 yearsagotoday
https://www.garlic.com/~lynn/2013m.html#86 'Free Unix!': The world-changing proclamation made 30 yearsagotoday
https://www.garlic.com/~lynn/2011f.html#8 New job for mainframes: Cloud platform
https://www.garlic.com/~lynn/2010k.html#54 Unix systems and Serialization mechanism
https://www.garlic.com/~lynn/2009o.html#57 U.S. begins inquiry of IBM in mainframe market
https://www.garlic.com/~lynn/2009k.html#36 Ingres claims massive database performance boost
https://www.garlic.com/~lynn/2009h.html#26 Natural keys vs Aritficial Keys
https://www.garlic.com/~lynn/2009b.html#40 "Larrabee" GPU design question
https://www.garlic.com/~lynn/2008i.html#18 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008h.html#91 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2007c.html#42 Keep VM 24X7 365 days
https://www.garlic.com/~lynn/2006o.html#32 When Does Folklore Begin???
https://www.garlic.com/~lynn/2006c.html#8 IBM 610 workstation computer
https://www.garlic.com/~lynn/2005h.html#26 Crash detection by OS
https://www.garlic.com/~lynn/2005.html#40 clusters vs shared-memory (was: Re: CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE))
https://www.garlic.com/~lynn/2004q.html#70 CAS and LL/SC
https://www.garlic.com/~lynn/2004m.html#0 Specifying all biz rules in relational data
https://www.garlic.com/~lynn/2004i.html#1 Hard disk architecture: are outer cylinders still faster than inner cylinders?
https://www.garlic.com/~lynn/2002f.html#1 Blade architectures
https://www.garlic.com/~lynn/2002e.html#67 Blade architectures
https://www.garlic.com/~lynn/aadsm26.htm#17 Changing the Mantra -- RFC 4732 on rethinking DOS
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM PC/RT AIX Date: 14 Sep, 2024 Blog: Facebook
First half of the 1970s, there was the "Future System" effort, completely different and going to completely replace 370; during FS, internal politics was killing off 370 activity (claim is that the lack of new 370 during FS is what gave the clone 370 makers their market foothold). Then when FS finally implodes there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel ... some more details:
I get sucked into helping with an effort to do a 16-processor 370 SMP (shared memory multiprocessor). 1976, there is an "advanced technology" conference in POK where both 801/RISC and the 16-processor design are presented (one of the 801/RISC people gives me a bad time, claiming he had looked at the VM370 product code, which had no SMP support; aka in the initial morph of CP67->VM370 lots of stuff was simplified and/or dropped, including SMP support). I've observed that it was the last adtech conference until sometime in the 80s (because so many adtech groups were being thrown into the 370 development breach). I had joked that John came up with 801/RISC to be the opposite of the complexity of "Future System".
Note: when I joined IBM, one of my hobbies was enhanced production operating systems for internal datacenters (HONE was a long-time customer) and I was moving a bunch of stuff from CP67->VM370, having just recently finished SMP support, originally for the consolidated US online sales&marketing HONE complex in Palo Alto ... to add a 2nd processor to each system in their large eight-system, single-system-image, loosely-coupled, shared-DASD operation that had load-balancing and fall-over (each two-processor system then getting twice the throughput of a single processor).
Everybody thought the 16-way SMP was really great until somebody told the head of POK that it could be decades before POK's favorite son operating system (MVS) had (effective) 16-processor support (i.e. the MVS docs at the time had 2-processor systems getting 1.2-1.5 times the throughput of a single processor). Then the head of POK invites some of us to never visit POK again (note POK doesn't ship a 16-processor machine until after the turn of the century; also the head of POK was in the process of convincing corporate to kill the VM370 product, shutdown the development group and transfer all the people to POK to work on MVS/XA; Endicott did eventually acquire the VM370 product mission for the mid-range, but had to recreate a development group from scratch).
Other trivia: early 70s, CEO Learson had tried (& failed) to block the
bureaucrats, careerists, and MBAs from destroying the Watson
culture/legacy ... FS had accelerated it, Ferguson & Morris, "Computer
Wars: The Post-IBM World", Time Books
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394
... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrong headedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive
... snip ...
... 20yrs later, IBM has one of the worst losses in the history of US
companies and was being reorganized into the 13 "baby blues" (take off
on AT&T "baby bells" breakup a decade earlier), in preparation for the
breakup
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk
asking if we could help with the breakup of the company. Before we get
started, the board brings in the former president of AMEX that
(somewhat) reverses the breakup.
Along the way, IBM did have a project to replace a wide range of different CISC microprocessors with 801/RISC, controllers, mid-range 370, S/38 followon ... for various reasons they floundered and returned to doing CISC. ROMP (801/RISC, research, OPD) was going to be used for Displaywriter followon. When that got canceled (market was moving to personal computers), it was decided to pivot to unix workstation market ... getting the company that had done the AT&T Unix port to IBM/PC, to do one for ROMP ... result becomes PC/RT and AIX. IBM Palo Alto was also in the process of doing UCB BSD UNIX port to 370 and got redirected to PC/RT ... which ships as "AOS" (alternative to AIX). Then the follow-on to ROMP was RIOS for RS/6000 (and AIXV3 where they merge in a lot of BSD'isms)
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
csc/vm posts
https://www.garlic.com/~lynn/submisc.html#cscvm
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, tightly-coupled, shared memory multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: big, fast, etc, was is Vax addressing sane today Newsgroups: comp.arch Date: Sat, 14 Sep 2024 10:57:57 -1000
John Levine <johnl@taugh.com> writes:
card associations were originally formed to promote brand acceptance/uptake/advertising and to network-interconnect the acquiring/merchant card transaction processors with the issuing/consumer card transaction processors (the issuer processor doing the real-time authorization/"auth" transaction).
late 90s, internet/micropayments was looking at card transaction processors being able to handle micropayments ... but required significantly higher transaction rate than card processors were capable of. They turn to cellphone operations that were using "in-memory" DBMS capable of ten times the transaction rate (that card processors were doing).
Some of the cellphone companies were enticed to get into micropayments but got out after a few years; turns out they lacked the significant fraud-handling capability (they were absorbing cellphone calling fraud because it was their own resources, but in the case of micropayments fraud, it involved actually transferring real money to other entities).
As an aside, the card association interconnect network was a flavor of VAN (value added networks) that was prevalent at the time, but was in the process of being obsoleted by the internet. Also, at the turn of the century, 90% of all acquiring&issuing card transactions in the US were being handled by six datacenters having their own private, dedicated, non-association interconnect ... big litigation between card associations and those processors (the card association network had been charging a fee for each transaction that flowed through their network, and the associations still wanted that fee paid whether or not the transaction actually flowed through their network).
a couple posts mentioning micropayments, in-memory dbms, fraud
https://www.garlic.com/~lynn/2010i.html#46 SAP recovers a secret for keeping data safer than the standard relational database
https://www.garlic.com/~lynn/aadsm28.htm#32 How does the smart telco deal with the bounty in its hands?
other posts mentioning card association, value added network, internet
https://www.garlic.com/~lynn/2018e.html#5 Computers, anyone?
https://www.garlic.com/~lynn/2017h.html#70 the 'Here is' key
https://www.garlic.com/~lynn/2015f.html#12 Credit card fraud solution coming to America...finally
https://www.garlic.com/~lynn/2015f.html#11 Credit card fraud solution coming to America...finally
https://www.garlic.com/~lynn/2015d.html#30 Western Union envisioned internet functionality
https://www.garlic.com/~lynn/2014l.html#67 LA Times commentary: roll out "smart" credit cards to deter fraud
https://www.garlic.com/~lynn/2009q.html#75 Now is time for banks to replace core system according to Accenture
https://www.garlic.com/~lynn/2004i.html#18 New Method for Authenticated Public Key Exchange without Digital Certificates
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
risk, fraud, exploits, threats, vulnerabilities posts
https://www.garlic.com/~lynn/subintegrity.html#fraud
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM PC/RT AIX Date: 14 Sep, 2024 Blog: Facebook
re:
late 80s, my wife presents HA/6000 project to Nick Donofrio and he
approves it; originally to move NYTimes newspaper system (ATEX) off
DEC VAXCluster to RS/6000. I then rename it HA/CMP when start doing
technical/scientific cluster scale-up with national labs and
commercial cluster scale-up with RDBMS vendors (Oracle, Sybase,
Informix, Ingres) that have VAXCluster support in same source base
with unix.
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
IBM had been logo'ing/selling another vendor's system as the S/88; the S/88 product administrator starts taking us around to their customers ... and also gets me to write a section for the corporate continuous availability strategy document (it gets pulled when both Rochester/AS400 and POK/mainframe complain they couldn't meet the requirements). Early JAN1992, meeting with Oracle CEO, AWD/Hester tells them we could have 16-system clusters by mid92 and 128-system clusters by ye92. Then end of JAN1992, cluster scale-up is transferred for announce as IBM Supercomputer (technical/scientific *ONLY*) and we are told we can't work on anything with more than four processors (we leave IBM a few months later).
trivia: AS/400 (followon to S/3# systems) was originally to be
801/RISC ... but dropped back to CISC.
https://en.wikipedia.org/wiki/IBM_AS/400#Fort_Knox
Then in the 90s, the executive we originally reported to, doing HA/CMP
... goes over to head up SOMERSET (AIM: apple, ibm, motorola) to do
single chip RISC, power/pc (including picking up some stuff from
motorola 88K, including multiprocessor and cache consistency).
https://en.wikipedia.org/wiki/PowerPC
https://en.wikipedia.org/wiki/PowerPC_600
https://wiki.preterhuman.net/The_Somerset_Design_Center
which then is also used for newer Rochester AS/400
https://en.wikipedia.org/wiki/IBM_AS/400#The_move_to_PowerPC
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
continuous availability, disaster survivability, geographic
survivability posts
https://www.garlic.com/~lynn/submain.html#available
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Retirement Date: 14 Sep, 2024 Blog: Facebook
AMEX was in competition with KKR for the private equity (LBO) take-over of RJR, and KKR wins. Then KKR was having trouble with RJR and hires away the AMEX president to help.
Note: Two decades earlier, CEO Learson tried (and failed) to block the
bureaucrats, careerists, and MBAs from destroying Watson
culture/legacy ... which was greatly accelerated during the "Future
System" project; Ferguson & Morris, "Computer Wars: The Post-IBM
World", Time Books
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394
... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and
*MAKE NO WAVES* under Opel and Akers. It's claimed that
thereafter, IBM lived in the shadow of defeat ... But because of the
heavy investment of face by the top management, F/S took years to
kill, although its wrong-headedness was obvious from the very
outset. "For the first time, during F/S, outspoken criticism became
politically dangerous," recalls a former top executive
... snip ...
more detail:
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
Future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
former amex president posts
https://www.garlic.com/~lynn/submisc.html#gerstner
pension plan posts
https://www.garlic.com/~lynn/submisc.html#pension
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: big, fast, etc, was is Vax addressing sane today Newsgroups: comp.arch Date: Sat, 14 Sep 2024 17:00:59 -1000
re:
before and after the turn of the century we would periodically have threads on the "bank fraud blame game" (in financial industry mailing lists); the interchange fees that financial institutions charge merchants are base plus a fraud surcharge ... adjusted for the fraud rate for the kind of transaction .... internet transactions can have the highest surcharge (with many banks' profit from the fraud surcharge reaching a major percentage of their bottom line)
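The fee structure described above can be sketched as a small calculation; the rates and fraud figures below are made-up illustrations of the shape of the computation, not actual interchange schedules:

```python
# Hedged sketch: interchange fee as a base rate plus a fraud surcharge
# scaled by the observed fraud rate for the transaction category.
# All rates here are hypothetical illustrations, not real card-network fees.

def interchange_fee(amount, base_rate, fraud_rate, fraud_multiplier):
    """Fee charged to the merchant: base percentage plus a surcharge
    proportional to the fraud rate for this kind of transaction."""
    base = amount * base_rate
    fraud_surcharge = amount * fraud_rate * fraud_multiplier
    return base + fraud_surcharge

# card-present vs card-not-present (internet) transaction, $100 each
card_present = interchange_fee(100.00, 0.015, 0.0005, 2.0)
internet     = interchange_fee(100.00, 0.015, 0.0100, 2.0)
print(f"card-present fee: ${card_present:.2f}")   # lower fraud rate, lower fee
print(f"internet fee:     ${internet:.2f}")       # highest fraud surcharge
```

The point of the shape: the base component is the same for both, so the entire fee difference comes from the fraud-rate term.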
right after the turn of the century, several "safe" transaction products were presented to major online merchants (representing 80% of total internet payment transactions), which saw high acceptance ... expecting that the fraud surcharge would be eliminated. Then the cognitive dissonance set in: they were told that instead of eliminating the fraud surcharge, a new large "safe" surcharge would be added on top of the existing fraud surcharge ... and all the interest evaporated.
I had co-authored financial industry transaction protocols as well as
done "safe" transaction chip design (that was one of the "safe"
products) ... was one of a panel giving a talk in a standing-room-only large
ballroom, semi-facetiously saying I was taking a $500 milspec chip and
aggressively cost reducing it by more than two orders of magnitude while
increasing its security:
https://csrc.nist.gov/pubs/conference/1998/10/08/proceedings-of-the-21st-nissc-1998/final
got prototype chips after turn of the century and gave talk in
assurance panel in the trusted computing track at 2001 IDF
https://web.archive.org/web/20011109072807/http://www.intel94.com/idf/spr2001/sessiondescription.asp?id=stp%2bs13
the guy running the trusted-computing TPM chip was in the front row and I
chided him that it was nice to see his chip was starting to look more
like mine; his response was that I didn't have a committee of 200
people helping me with design.
a few posts mentioning AADS chip
https://www.garlic.com/~lynn/2022e.html#84 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022b.html#108 Attackers exploit fundamental flaw in the web's security to steal $2 million in cryptocurrency
https://www.garlic.com/~lynn/2015f.html#20 Credit card fraud solution coming to America...finally
https://www.garlic.com/~lynn/2004d.html#7 Digital Signature Standards
https://www.garlic.com/~lynn/2003l.html#61 Can you use ECC to produce digital signatures? It doesn't see
https://www.garlic.com/~lynn/2002g.html#38 Why is DSA so complicated?
https://www.garlic.com/~lynn/aadsm12.htm#19 TCPA not virtualizable during ownership change (Re: Overcoming the potential downside of TCPA)
more AADS Chip
https://www.garlic.com/~lynn/x959.html#aads
fraud posts
https://www.garlic.com/~lynn/subintegrity.html#fraud
assurance posts
https://www.garlic.com/~lynn/subintegrity.html#assurance
payment posts
https://www.garlic.com/~lynn/subintegrity.html#payments
a few posts mentioning bank fraud blame game
https://www.garlic.com/~lynn/aadsm28.htm#81 not crypto, but fraud detection
https://www.garlic.com/~lynn/aadsm28.htm#41 Trojan with Everything, To Go!
https://www.garlic.com/~lynn/aadsm28.htm#18 Lack of fraud reporting paths considered harmful
https://www.garlic.com/~lynn/aadsm27.htm#58 On the downside of the MBA-equiped CSO
https://www.garlic.com/~lynn/aadsm27.htm#52 more on firing your MBA-less CSO
https://www.garlic.com/~lynn/aadsm27.htm#50 If your CSO lacks an MBA, fire one of you
https://www.garlic.com/~lynn/aadsm27.htm#45 Threatwatch: how much to MITM, how quickly, how much lost
https://www.garlic.com/~lynn/aadsm27.htm#44 Threatwatch: how much to MITM, how quickly, how much lost
https://www.garlic.com/~lynn/aadsm27.htm#43 a fraud is a sale, Re: The bank fraud blame game
https://www.garlic.com/~lynn/aadsm27.htm#42 The bank fraud blame game
https://www.garlic.com/~lynn/aadsm27.htm#41 The bank fraud blame game
https://www.garlic.com/~lynn/aadsm27.htm#40 a fraud is a sale, Re: The bank fraud blame game
https://www.garlic.com/~lynn/aadsm27.htm#39 a fraud is a sale, Re: The bank fraud blame game
https://www.garlic.com/~lynn/aadsm27.htm#38 The bank fraud blame game
https://www.garlic.com/~lynn/aadsm27.htm#37 The bank fraud blame game
https://www.garlic.com/~lynn/aadsm27.htm#35 The bank fraud blame game
https://www.garlic.com/~lynn/aadsm27.htm#34 The bank fraud blame game
https://www.garlic.com/~lynn/aadsm27.htm#33 The bank fraud blame game
https://www.garlic.com/~lynn/aadsm27.htm#32 The bank fraud blame game
https://www.garlic.com/~lynn/aadsm27.htm#31 The bank fraud blame game
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM - Making The World Work Better Date: 15 Sep, 2024 Blog: Facebook
Making The World Work Better: The Ideas That Shaped a Century and a Company
Note: 1972, CEO Learson tried (and failed) to block the bureaucrats,
careerists, and MBAs from destroying Watson culture/legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
... which was greatly accelerated during the "Future System" project; Ferguson & Morris, "Computer Wars: The Post-IBM World", Time Books
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394
... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and
*MAKE NO WAVES* under Opel and Akers. It's claimed that
thereafter, IBM lived in the shadow of defeat ... But because of the
heavy investment of face by the top management, F/S took years to
kill, although its wrong-headedness was obvious from the very
outset. "For the first time, during F/S, outspoken criticism became
politically dangerous," recalls a former top executive
... snip ...
Two decades later, IBM had one of the largest losses in the history of
US companies and was being reorganized into the 13 "baby blues" (a
takeoff on the AT&T "baby bells" breakup a decade earlier) in
preparation for breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk
asking if we could help with the company breakup. Before we get
started, the board brings in the former president of Amex as CEO, who
(somewhat) reverses the breakup ... and uses some of the techniques
used at RJR (ref gone 404, but lives on at wayback machine).
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml
IBM becoming a financial engineering company, Stockman; The Great
Deformation: The Corruption of Capitalism in America
https://www.amazon.com/Great-Deformation-Corruption-Capitalism-America-ebook/dp/B00B3M3UK6/
pg464/loc9995-10000:
IBM was not the born-again growth machine trumpeted by the mob of Wall
Street momo traders. It was actually a stock buyback
contraption on steroids. During the five years ending in fiscal 2011,
the company spent a staggering $67 billion repurchasing its own
shares, a figure that was equal to 100 percent of its net income.
pg465/loc10014-17:
Total shareholder distributions, including dividends, amounted to $82
billion, or 122 percent, of net income over this five-year
period. Likewise, during the last five years IBM spent less on capital
investment than its depreciation and amortization charges, and also
shrank its constant dollar spending for research and development by
nearly 2 percent annually.
... snip ...
(2013) New IBM Buyback Plan Is For Over 10 Percent Of Its Stock
http://247wallst.com/technology-3/2013/10/29/new-ibm-buyback-plan-is-for-over-10-percent-of-its-stock/
(2014) IBM Asian Revenues Crash, Adjusted Earnings Beat On Tax Rate Fudge; Debt Rises 20% To Fund Stock Buybacks (gone behind paywall)
https://web.archive.org/web/20140201174151/http://www.zerohedge.com/news/2014-01-21/ibm-asian-revenues-crash-adjusted-earnings-beat-tax-rate-fudge-debt-rises-20-fund-st
The company has represented that its dividends and share repurchases
have come to a total of over $159 billion since 2000.
... snip ...
(2016) After Forking Out $110 Billion on Stock Buybacks, IBM
Shifts Its Spending Focus
https://www.fool.com/investing/general/2016/04/25/after-forking-out-110-billion-on-stock-buybacks-ib.aspx
(2018) ... still doing buybacks ... but will (now?, finally?, a
little?) shift focus, needing the money for the Red Hat purchase.
https://www.bloomberg.com/news/articles/2018-10-30/ibm-to-buy-back-up-to-4-billion-of-its-own-shares
(2019) IBM Tumbles After Reporting Worst Revenue In 17 Years As Cloud
Hits Air Pocket (gone behind paywall)
https://web.archive.org/web/20190417002701/https://www.zerohedge.com/news/2019-04-16/ibm-tumbles-after-reporting-worst-revenue-17-years-cloud-hits-air-pocket
more financial engineering
IBM deliberately misclassified mainframe sales to enrich execs, lawsuit claims. Lawsuit accuses Big Blue of cheating investors by shifting systems revenue to trendy cloud, mobile tech
https://www.theregister.com/2022/04/07/ibm_securities_lawsuit/
IBM has been sued by investors who claim the company under former CEO
Ginni Rometty propped up its stock price and deceived shareholders by
misclassifying revenues from its non-strategic mainframe business -
and moving said sales to its strategic business segments - in
violation of securities regulations.
... snip ...
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
former amex president posts
https://www.garlic.com/~lynn/submisc.html#gerstner
pension posts
https://www.garlic.com/~lynn/submisc.html#pension
stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: ARPANET, Internet, Internal Network and DES Date: 15 Sep, 2024 Blog: Facebook
The IBM internal network was larger than the ARPANET/Internet from just about the beginning until sometime mid/late 80s. At the 1Jan1983 great cutover to internetworking protocol, there were approx 100 IMPs and 255 HOSTs, while the internal network was rapidly approaching 1000. I've commented that one of the things impacting arpanet expansion was the difficulty/limits getting IMPs ... while the internal network limit (besides having to be a corporate installation) was the requirement that all links be encrypted, plus periodic gov. resistance to encrypted links (especially when links crossed country boundaries).
In the early 80s, I got HSDT project, T1 (1.5mbits/sec) and faster computer links, both terrestrial and satellite (trivia: corporation had 2701 in 60s that supported T1 links, but in the mid-70s transition to SNA/VTAM, associated problems appeared to cap links at 56kbit, HSDT T1&higher brought conflicts with the IBM communication products group).
I really hated what I had to pay for T1 link encryptors (and faster ones were nearly impossible to find). I then became involved in link encryptors that would support 3mbytes/sec and cost less than $100 to build. At first, the corporate crypto group claimed it seriously compromised the DES standard. It took me three months to figure out how to convince corporate that instead of being compromised, it was significantly stronger than the standard. It was a hollow victory; I was then told there was only one organization in the world that is allowed to use such crypto: I could make as many as I wanted, but they all had to be shipped to them. It was when I realized there were three kinds of crypto in the world: 1) the kind they don't care about, 2) the kind you can't do, 3) the kind you can only do for them.
trivia: when I started HSDT, the top of the line IBM mainframe, two
processor 3081K, ran software DES standard at 150kbytes/sec (3081K as
link encryptor would require both processors dedicated to handle
single T1 full-duplex link, requiring two such dedicated 3081Ks, one
for each end). old archive email ref
https://www.garlic.com/~lynn/2006n.html#email841115
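The arithmetic behind that claim can be sanity-checked; a back-of-envelope sketch using the figures quoted above (and treating the 150kbytes/sec software DES figure as the sustained rate available for the task):

```python
# Back-of-envelope check of the 3081K-as-T1-link-encryptor claim:
# software DES at ~150 kbytes/sec vs. a full-duplex T1 link.

T1_BITS_PER_SEC = 1_544_000              # T1 line rate
bytes_each_way = T1_BITS_PER_SEC / 8     # ~193 kbytes/sec per direction
full_duplex = 2 * bytes_each_way         # encrypt outbound + decrypt inbound
DES_BYTES_PER_SEC = 150_000              # quoted software DES rate

ratio = full_duplex / DES_BYTES_PER_SEC
print(f"each direction: {bytes_each_way/1000:.0f} kbytes/sec")
print(f"full-duplex DES load: {full_duplex/1000:.0f} kbytes/sec")
print(f"multiples of the quoted DES rate: {ratio:.1f}")
# roughly 2.6x the quoted DES rate, i.e. essentially an entire
# two-processor 3081K dedicated per end of the link
```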
Internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
posts mentioning link encryptors, DES, and 3 kinds of crypto
https://www.garlic.com/~lynn/2024e.html#28 VMNETMAP
https://www.garlic.com/~lynn/2024d.html#75 Joe Biden Kicked Off the Encryption Wars
https://www.garlic.com/~lynn/2023f.html#79 Vintage Mainframe XT/370
https://www.garlic.com/~lynn/2022g.html#17 Early Internet
https://www.garlic.com/~lynn/2019e.html#86 5 milestones that created the internet, 50 years after the first network message
https://www.garlic.com/~lynn/2019b.html#100 mainframe hacking "success stories"?
https://www.garlic.com/~lynn/2017g.html#91 IBM Mainframe Ushers in New Era of Data Protection
https://www.garlic.com/~lynn/2017d.html#10 Encryp-xit: Europe will go all in for crypto backdoors in June
https://www.garlic.com/~lynn/2017c.html#69 ComputerWorld Says: Cobol plays major role in U.S. government breaches
https://www.garlic.com/~lynn/2017b.html#44 More on Mannix and the computer
https://www.garlic.com/~lynn/2015h.html#3 PROFS & GML
https://www.garlic.com/~lynn/2015c.html#85 On a lighter note, even the Holograms are demonstrating
https://www.garlic.com/~lynn/2014e.html#27 TCP/IP Might Have Been Secure From the Start If Not For the NSA
https://www.garlic.com/~lynn/2014e.html#25 Is there any MF shop using AWS service?
https://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2014.html#9 NSA seeks to build quantum computer that could crack most types of encryption
https://www.garlic.com/~lynn/2013g.html#31 The Vindication of Barb
https://www.garlic.com/~lynn/2012k.html#47 T-carrier
https://www.garlic.com/~lynn/2012.html#63 Reject gmail
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 360/67 Blue Card Date: 16 Sep, 2024 Blog: Facebook
When I joined IBM Cambridge Scientific Center, I got a 360/67 "blue
card" from one of the inventors of "GML" (@CSC in 1969) ... "GML"
chosen for the first letters of the three inventors' last names
https://en.wikipedia.org/wiki/IBM_Generalized_Markup_Language
trivia: MIT CTSS/7094 RUNOFF had been rewritten for cp67/cms as SCRIPT. After GML was invented, a GML tag processor was added to SCRIPT.
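As a rough illustration of the difference: SCRIPT originally used RUNOFF-style dot-command formatting controls, while GML added descriptive tags that the SCRIPT processor translated into formatting. The snippet below is a simplified sketch of the two styles, not verbatim historical markup:

```
.* SCRIPT "dot command" style: procedural formatting controls
.ce Centered Title
.sp 1
.in 5
Some indented paragraph text.

.* GML style: descriptive tags, processed by the SCRIPT formatter
:h1.Document Title
:p.Some paragraph text.
:ul.
:li.first list item
:li.second list item
:eul.
```

The dot commands say how to lay text out; the GML tags say what the text is, leaving layout to the processor.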
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
some past 67 "blue card" posts
https://www.garlic.com/~lynn/2023f.html#114 Copyright Software
https://www.garlic.com/~lynn/2023d.html#121 Science Center, SCRIPT, GML, SGML, HTML, RSCS/VNET
https://www.garlic.com/~lynn/2023.html#67 IBM "Green Card"
https://www.garlic.com/~lynn/2022f.html#69 360/67 & DUMPRX
https://www.garlic.com/~lynn/2022b.html#86 IBM "Green Card"
https://www.garlic.com/~lynn/2021k.html#104 DUMPRX
https://www.garlic.com/~lynn/99.html#11 Old Computers
https://www.garlic.com/~lynn/98.html#14 S/360 operating systems geneaology
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 360/67 Blue Card Date: 16 Sep, 2024 Blog: Facebook
re:
trivia:
https://en.wikipedia.org/wiki/History_of_CP/CMS
As an undergraduate, the univ had hired me fulltime, responsible for OS/360 (running on a 360/67 as a 360/65; the machine had originally been acquired for TSS/360, which never came to fruition). The univ shutdown the datacenter on weekends and I would have it dedicated (although 48hrs w/o sleep made Monday classes hard).
Jan1968, CSC came out and installed CP67 at the univ (3rd installation after CSC itself and MIT Lincoln Labs) and I mostly got to play with it during my weekend dedicated time, the initial few months rewriting lots of CP67 for running OS/360 in a virtual machine. My OS360 test job stream ran 322secs on the bare machine, initially 856secs virtually (534secs of CP67 CPU). Managed to get CP67 CPU down to 113secs.
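The size of the improvement in those numbers is easy to work out; a quick check using the figures quoted above:

```python
# CP67 virtualization overhead for the OS/360 test job stream,
# using the figures quoted in the text above.

bare_machine = 322        # secs, OS/360 on the bare machine
initial_virtual = 856     # secs, same job stream under CP67, before rewrite
initial_cp67_cpu = 534    # secs of that spent in CP67 itself
final_cp67_cpu = 113      # secs in CP67 after the rewrite

# overhead accounting: virtual elapsed = bare time + CP67 CPU time
assert initial_virtual - bare_machine == initial_cp67_cpu

reduction = 1 - final_cp67_cpu / initial_cp67_cpu
print(f"CP67 CPU overhead cut from {initial_cp67_cpu}s to {final_cp67_cpu}s "
      f"(about {reduction:.0%} reduction)")
```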
Was then invited to be part of the public announce at the May Houston SHARE meeting. CSC then hosted a one week CP67/CMS class on the west coast. I arrived on Sunday and was asked to teach the class; the CSC people that were supposed to teach it had resigned on Friday to join a commercial CP67 company (spinoff of CSC).
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
posts mentioning asked to teach class:
https://www.garlic.com/~lynn/2024b.html#17 IBM 5100
https://www.garlic.com/~lynn/2024.html#40 UNIX, MULTICS, CTSS, CSC, CP67
https://www.garlic.com/~lynn/2023c.html#88 More Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#82 Dataprocessing Career
https://www.garlic.com/~lynn/2023c.html#67 VM/370 3270 Terminal
https://www.garlic.com/~lynn/2023.html#57 Almost IBM class student
https://www.garlic.com/~lynn/2010g.html#68 What is the protocal for GMT offset in SMTP (e-mail) header
https://www.garlic.com/~lynn/2003d.html#72 cp/67 35th anniversary
https://www.garlic.com/~lynn/2001m.html#55 TSS/360
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Study says undocumented immigrants paid almost $100 billion in taxes Date: 17 Sep, 2024 Blog: Facebook
Study says undocumented immigrants paid almost $100 billion in taxes
In the 90s, congress asked GAO for studies on paying workers below
living wage ... GAO report found it cost (city/state/federal)
govs. avg $10K/worker/year .... basically worked out to an indirect
gov. subsidy to their employers. The interesting thing is that it has
been almost 30yrs since that report ... and have seen no congress this
century asking the GAO to update the study.
https://www.gao.gov/assets/hehs-95-133.pdf
The (2002) congress is also responsible for letting the fiscal responsibility act lapse (couldn't spend more than tax revenue, on its way to eliminating all federal debt, courtesy of the 90s congress). A 2010 CBO report had, for 2003-2009, tax revenue cut by $6T and spending increased by $6T, for a $12T gap compared to a fiscal responsibility budget. By 2005, the U.S. Comptroller General was including in speeches that nobody in congress was capable of middle school arithmetic (for what they were doing to the budget). Since then, taxes haven't been restored and there have been only modest cuts in spending, so debt continues to increase.
All part of why congress this century is considered the most corrupt institution on earth.
In 1992, AMEX spun off a lot of its dataprocessing as FDC, in the largest IPO up until that time. FDC then looked at acquiring Western Union, but backed out because WU was doing poorly ... however it later got WU anyway (still doing poorly) in a merger with First Financial (having to divest MoneyGram as part of the deal) ... around the same time-frame as the GAO study (which also looked at undocumented immigrants being paid less than minimum wage, and how much they cost in gov. services; in effect significantly more gov. subsidy to their employers). However, with the explosion in undocumented immigrants (being paid less than minimum wage) after the start of the century, by 2005 WU revenue had exploded to half of FDC's bottom line. Possibly in part because the President of Mexico invited FDC executives to Mexico to be thrown in jail (for how much they were making off all the undocumented immigrants), FDC spun off WU. The claim is one of the reasons that the corrupt congress this century ignored the problem (including not asking for an update of the GAO report) was heavy lobbying by industries that effectively benefit from the indirect gov. subsidies (being allowed to pay less than minimum & living wage)
The industry lobbying dominated ... turning blind eye to enormous
explosion in undocumented immigrants that occurred around the start of
the century and being paid less than minimum wage. There is a book
written about how the (national) chamber of commerce became the center of it
... and it got so bad that local chapters started divorcing themselves
from the national organization.
https://www.amazon.com/dp/B00NDTUDHA/
Minimum wage has been used for different purposes at different times
... however recently this shows that minimum wage has not only not
kept pace with cost of living ... it has actually declined in terms of
real dollars (sometimes story is being spun to obfuscate fundamental
issues)
http://bebusinessed.com/history/history-of-minimum-wage/
https://en.wikipedia.org/wiki/Minimum_wage_in_the_United_States
corresponds to this that real wages have remained flat since 1980
... while productivity has gone up significantly (the increasing
difference being siphoned off).
http://www.nytimes.com/imagepages/2011/09/04/opinion/04reich-graphic.html
Even more was siphoned off with enormous explosion in undocumented immigrants last decade, being paid not only less than living wage ... but also less than minimum wage (demonstrated with the explosion in WU revenue between 2000 & 2005 from undocumented immigrants sending paychecks home) ... and then from 90s GAO report (not being updated is strong indication of lobbying pressure) ... gov. services were increasingly necessary to cover the widening gap (effectively indirect gov. subsidy for many industries).
The tens of millions of undocumented immigrants since the start of the century are far more of an impact on jobs for the working poor than any minimum wage issue (with slave wages helping downward pressure on other wages)
Inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
Capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
Fiscal responsibility act posts
https://www.garlic.com/~lynn/submisc.html#fiscal.responsibility.act
Comptroller general posts
https://www.garlic.com/~lynn/submisc.html#comptroller.general
posts mentioning AMEX, FDC, WU, undocumented workers
https://www.garlic.com/~lynn/2019e.html#155 Book on monopoly (IBM)
https://www.garlic.com/~lynn/2019e.html#144 PayPal, Western Union Named & Shamed for Overcharging the Most on Money Transfers to Mexico
https://www.garlic.com/~lynn/2019e.html#47 king sized ash tray "the good life" 1967 job ad
https://www.garlic.com/~lynn/2019d.html#84 Steve King Devised an Insane Formula to Claim Undocumented Immigrants Are Taking Over America
https://www.garlic.com/~lynn/2019d.html#74 Employers escape sanctions, while the undocumented risk lives and prosecution
https://www.garlic.com/~lynn/2018f.html#119 What Minimum-Wage Foes Got Wrong About Seattle
https://www.garlic.com/~lynn/2018.html#67 Pushing Out Immigrants Isn't About the Economy
https://www.garlic.com/~lynn/2017h.html#114 EasyLink email ad
https://www.garlic.com/~lynn/2017h.html#9 Corporate Profit and Taxes
https://www.garlic.com/~lynn/2017h.html#2 Trump is taking the wrong approach to China on tech, says ex-Reagan official who helped beat Soviets
https://www.garlic.com/~lynn/2017d.html#9 Which States Account for Our Trade Deficit with Mexico?
https://www.garlic.com/~lynn/2017c.html#77 Trump's crackdown focuses on people in the U.S. illegally - but not on the businesses that hire them
https://www.garlic.com/~lynn/2017b.html#16 Trump to sign cyber security order
https://www.garlic.com/~lynn/2016h.html#103 Minimum Wage
https://www.garlic.com/~lynn/2016f.html#49 old Western Union Telegraph Company advertising
https://www.garlic.com/~lynn/2016d.html#51 Penn Central PL/I advertising
https://www.garlic.com/~lynn/2016b.html#100 Ray Tomlinson, inventor of modern email, dies
https://www.garlic.com/~lynn/2015d.html#27 Western Union envisioned internet functionality
https://www.garlic.com/~lynn/2014m.html#162 LEO
https://www.garlic.com/~lynn/2014f.html#74 Is end of mainframe near ?
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM 4300 Date: 17 Sep, 2024 Blog: Facebook
IBM 4300
After transferring to San Jose Research, I got to wander around datacenters in silicon valley, including bldgs14&15 (disk engineering & product test) across the street. They were running 7x24, pre-scheduled, stand-alone (mainframe) testing; they said that they had recently tried MVS (for some concurrent testing), but MVS had 15min mean-time-between-failure (requiring manual re-ipl) in that environment. I offered to rewrite the I/O supervisor so it was bulletproof and never failed, so they could do any amount of on-demand, concurrent testing, significantly improving productivity.
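The actual rewrite isn't public code; a minimal sketch of the general idea — bounding retries on transient device errors and fencing a persistently misbehaving device instead of failing the whole system — might look like the following (all names and structure here are illustrative, not IBM's actual I/O supervisor):

```python
# Illustrative sketch only: bounded retry plus device fencing, so a
# misbehaving test device cannot take down the whole I/O supervisor.
# Names and structure are hypothetical, not IBM's actual code.

MAX_RETRIES = 3
fenced = set()   # devices taken offline after repeated errors

def start_io(device, op, do_io):
    """Attempt an I/O operation; retry transient errors, and fence the
    device after repeated failures instead of failing the system."""
    if device in fenced:
        return ("fenced", None)
    for attempt in range(MAX_RETRIES):
        try:
            return ("ok", do_io(device, op))
        except IOError:
            continue          # transient error: retry the channel program
    fenced.add(device)        # persistent error: isolate the device
    return ("fenced", None)

# usage: a device that always errors gets fenced, others keep working
def flaky(device, op):
    raise IOError("unit check")

def good(device, op):
    return f"{op} complete on {device}"

print(start_io("191", "read", flaky))   # ('fenced', None)
print(start_io("190", "read", good))    # ('ok', 'read complete on 190')
```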
Bldg15 would get early engineering models, and when 3033 (#3 or #4) arrived, it turns out testing only took a percent or two of the processor, so we scrounge up a 3830 disk controller and a string of 3330 drives and put up our own private online service ... including running 3270 cable under the street to my office in bldg28. When the engineering 4341 shows up, I'm con'ed into doing a benchmark for a national lab that was looking at getting 70 for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami).
https://en.wikipedia.org/wiki/IBM_4300#IBM_4341
The IBM 4341 (and the 4331) were announced on 30 January 1979.[3] Like
the 4331, it came with an integrated adapter that permitted attaching
up to 16 of the newly introduced IBM 3370 DASD. The 4341 did not
support the much lower capacity IBM 3310. The 4341 Introduced the
Extended Control Program Support:VM (ECPS:VM), Extended Control
Program Support:VS1 (ECPS:VS1) and Extended Control Program
Support:Virtual Storage Extended[6][7] (ECPS:VSE) features. The 4341-2
introduced the Extended Control Program Support:MVS[8][9] (ECPS:MVS)
option, a subset of System/370 extended facility.
... snip ...
... in the early 70s, there was the Future System effort, which was
completely different and was going to completely replace 370 (during
FS, internal politics was killing off 370 efforts, and the claim is
that the lack of new 370 during FS is what gave the clone 370 makers
their market foothold). When FS eventually implodes, there is a mad
rush to get stuff back into the 370 product pipelines, including
kicking off the quick&dirty 3033 and 3081 efforts in parallel.
http://www.jfsowa.com/computer/memo125.htm
I get con'ed into working on a 370/125 five-CPU multiprocessor and 138/148 ECPS ... after the 125 multiprocessor is canceled, I'm asked to help with a high-end 370 16-CPU multiprocessor.
For 138/148 ECPS, I'm told that they have 6kbytes for microcode and
I'm to find the highest-executed 370 kernel instruction paths, which
would approx. translate to microcode in the same number of bytes. Old
archived post with the initial analysis (the highest 6kbytes account
for 79.55% of kernel execution; translated to native microcode they
run ten times faster), which carries over to 4331/4341:
https://www.garlic.com/~lynn/94.html#21
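A small sketch of that selection arithmetic (the path data below is invented for illustration; only the 6kbyte budget, the 79.55% coverage figure, and the 10x microcode speed come from the analysis above):

```python
# Given 370 kernel instruction paths with (size, fraction of kernel
# execution), greedily pick the hottest paths under the 6kbyte microcode
# budget, then estimate the kernel speedup when those paths run ten
# times faster in microcode.

def pick_paths(paths, budget_bytes):
    """Greedy: take paths in descending execution density until the budget is spent."""
    ranked = sorted(paths, key=lambda p: p["fraction"] / p["bytes"], reverse=True)
    chosen, used = [], 0
    for p in ranked:
        if used + p["bytes"] <= budget_bytes:
            chosen.append(p)
            used += p["bytes"]
    return chosen, used

def kernel_speedup(covered, microcode_speed=10.0):
    """Amdahl-style: 'covered' fraction of kernel time runs microcode_speed times faster."""
    return 1.0 / ((1.0 - covered) + covered / microcode_speed)

# with the post's numbers: 6kbytes of paths covering 79.55% of kernel execution
print(round(kernel_speedup(0.7955), 2))  # ~3.52x overall kernel speedup
```

i.e. moving under 6kbytes of hot paths into microcode gives roughly a 3.5x overall kernel speedup, even though only a small fraction of kernel code was translated.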
Endicott objects to the 5-CPU 125 because its throughput would overlap 138/148 throughput; I had to argue both sides and the 125 effort was canceled. I then get sucked into the 16-CPU effort and we con the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips). Everybody thought the 16-CPU effort was great until somebody tells the head of POK that it could be decades before the POK favorite son operating system ("MVS") had (effective) 16-CPU support (at the time, MVS docs said that MVS 2-CPU throughput was only 1.2-1.5 times the throughput of a single processor ... and it got worse as the number of CPUs increased; note POK doesn't ship a 16-CPU system until after the turn of the century). The head of POK then tells some of us to never visit POK again (and the 3033 engineers: no distractions, heads-down on 3033). I would sneak back into POK (until transferring out to SJR).
Endicott also tried to convince corporate to allow them to pre-install VM370 on every machine shipped (somewhat like current PR/SM-LPAR), but the head of POK was in the process of convincing corporate to kill VM370, shut down the development group, and transfer all the people to POK for MVS/XA (and got the Endicott pre-install vetoed). Endicott eventually manages to save the VM370 product mission, but had to recreate a development group from scratch.
trivia: over a decade ago, I had been asked to track down the IBM decision to add virtual memory to all 370s. I eventually found the assistant to the executive making the decision. Basically MVT storage management was so bad that region sizes had to be specified four times larger than used. As a result, a 1mbyte 370/165 would normally only run four concurrent regions, insufficient to keep the 165 busy and justify it. Mapping MVT to a 16mbyte virtual memory (VS2/SVS, sort of like running MVT in a CP67 16mbyte virtual machine), they could increase concurrent regions by a factor of four with little or no paging (modulo being capped at 15 because of the 4-bit storage protect keys).
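The arithmetic in that trivia is worth making explicit (a back-of-envelope sketch; the 4x over-declaration, four regions on a 1mbyte 165, and the 4-bit protect key are from the post):

```python
# MVT regions declared 4x larger than used meant a 1mbyte 370/165 ran
# only four concurrent regions.  Mapping MVT into a 16mbyte virtual
# address space (VS2/SVS) allows roughly 4x the regions, but the 4-bit
# storage protect key (key 0 reserved for the system) caps concurrent
# regions at 15.

def svs_regions(mvt_regions, factor=4, key_bits=4):
    """Regions after going virtual: factor-times more, capped by the
    number of non-zero storage protect keys."""
    return min(mvt_regions * factor, 2**key_bits - 1)

print(svs_regions(4))  # 4*4 = 16, capped at 15
```

Getting past the 15-region protect-key cap is exactly what motivated the later VS2/MVS move to one virtual address space per region.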
trivia2: mid-80s, the communication group was fiercely fighting off client/server and distributed computing, trying to block the announce of mainframe TCP/IP support. When they lost, they changed strategy and said that since they had corporate strategic ownership for everything that crossed datacenter walls, it had to be released through them; what shipped got 44kbytes/sec aggregate using nearly a whole 3090 CPU. I then did the changes for RFC1044 support and, in some tuning tests at Cray Research between a Cray and an IBM 4341, got sustained 4341 channel throughput using only a modest amount of 4341 CPU (something like a 500 times improvement in bytes moved per instruction executed).
getting to play disk engineer in 14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
360/370 mcode posts
https://www.garlic.com/~lynn/submain.html#360mcode
SMP, tightly-coupled, multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
some posts referencing decision to add virtual memory to all 370s
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#88 Computer Virtual Memory
https://www.garlic.com/~lynn/2024d.html#63 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024d.html#29 Future System and S/38
https://www.garlic.com/~lynn/2024b.html#108 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#107 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#58 Vintage MVS
https://www.garlic.com/~lynn/2024b.html#12 3033
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2024.html#24 Tomasulo at IBM
https://www.garlic.com/~lynn/2023g.html#19 OS/360 Bloat
https://www.garlic.com/~lynn/2023e.html#70 The IBM System/360 Revolution
https://www.garlic.com/~lynn/2023e.html#15 Copyright Software
https://www.garlic.com/~lynn/2023d.html#71 IBM System/360, 1964
https://www.garlic.com/~lynn/2023b.html#6 z/VM 50th - part 7
https://www.garlic.com/~lynn/2023b.html#0 IBM 370
https://www.garlic.com/~lynn/2023.html#76 IBM 4341
https://www.garlic.com/~lynn/2022h.html#115 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022g.html#87 CICS (and other history)
https://www.garlic.com/~lynn/2022g.html#2 VM/370
https://www.garlic.com/~lynn/2022f.html#122 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022f.html#83 COBOL and tricks
https://www.garlic.com/~lynn/2021h.html#70 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2021b.html#63 Early Computer Use
https://www.garlic.com/~lynn/2019c.html#25 virtual memory
https://www.garlic.com/~lynn/2019b.html#53 S/360
https://www.garlic.com/~lynn/2019.html#26 Where's the fire? | Computerworld Shark Tank
https://www.garlic.com/~lynn/2019.html#18 IBM assembler
https://www.garlic.com/~lynn/2018c.html#23 VS History
https://www.garlic.com/~lynn/2017j.html#84 VS/Repack
https://www.garlic.com/~lynn/2017e.html#5 TSS/8, was A Whirlwind History of the Computer
https://www.garlic.com/~lynn/2011d.html#73 Multiple Virtual Memory
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Scalable Computing Date: 17 Sep, 2024 Blog: FacebookMy wife and I get the HA/6000 project, initially for the NYTimes to port their newspaper system (ATEX) from DEC VAXCluster to RS/6000. I rename it HA/CMP when we start doing technical/scientific cluster scale-up with the national labs and commercial cluster scale-up with the RDBMS vendors (Oracle, Sybase, Informix, Ingres). Then the executive we report to moves over to head up the AIM (Apple, IBM, Motorola) effort.
Early Jan92, there is HA/CMP meeting with Oracle CEO where AWD/Hester
tells Ellison that we would have 16processor clusters by mid92 and
128processor clusters by ye92. Then by late Jan92, cluster scale-up is
transferred for announce as IBM supercomputer (for
technical/scientific *ONLY*) and we are told we can't work with
anything that has more than four processors (we leave IBM a few months
later).
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
Comparison of high-end mainframe and RIOS/POWER cluster, which
possibly contributed to performance kneecapping (commercial) HA/CMP
systems
1993: eight processor ES/9000-982 : 408MIPS, 51MIPS/processor
1993: RS6000/990 : 126MIPS; 16-system: 2016MIPs, 128-system: 16,128MIPS
By late 90s, the i86 chip vendors were developing technology with
hardware layer that translates i86 instructions into RISC micro-ops
for execution, largely negating the performance difference between
801/RISC and i86.
1999 single IBM PowerPC 440 hits 1,000MIPS
1999 single Pentium3 (translation to RISC micro-ops for execution)
hits 2,054MIPS (twice PowerPC 440)
2003 single Pentium4 processor 9.7BIPS (9,700MIPS), greater than
2003 z990
2010 E5-2600 XEON server blade, two chip, 16 processor, aggregate
500BIPS (31BIPS/processor)
All benchmarks have been the number of program iterations compared to
the reference platform (370/158-3), not actual instruction counts. The
2010 E5-2600 XEON server blade was common in large megadatacenters
(large cloud operations may have scores of megadatacenters around the
world, each with half a million or more server blades, each
megadatacenter the equivalent of several million mainframes); each
E5-2600 was ten times the 2010 z196. Since 2010, server blades have
increased the performance gap with mainframes (the 2010 E5-2600's
500BIPS still exceeds the z16's 222BIPS).
z900, 16 processors, 2.5BIPS (156MIPS/proc), Dec2000
z990, 32 processors, 9BIPS, (281MIPS/proc), 2003
z9, 54 processors, 18BIPS (333MIPS/proc), July2005
z10, 64 processors, 30BIPS (469MIPS/proc), Feb2008
z196, 80 processors, 50BIPS (625MIPS/proc), Jul2010
EC12, 101 processors, 75BIPS (743MIPS/proc), Aug2012
z13, 140 processors, 100BIPS (710MIPS/proc), Jan2015
z14, 170 processors, 150BIPS (862MIPS/proc), Aug2017
z15, 190 processors, 190BIPS* (1000MIPS/proc), Sep2019
• pubs say z15 1.25 times z14 (1.25*150BIPS or 190BIPS)
z16, 200 processors, 222BIPS* (1111MIPS/proc), Sep2022
• pubs say z16 1.17 times z15 (1.17*190BIPS or 222BIPS)
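The MIPS/BIPS figures above can be read as iteration ratios against the 370/158-3 reference (conventionally rated at 1 MIPS); a minimal sketch of that normalization, with invented iteration counts for illustration:

```python
# Rated MIPS here are not instruction counts: they are program
# iterations relative to a 370/158-3 run, taken as the 1 MIPS reference.

REFERENCE_MIPS = 1.0  # 370/158-3 reference platform

def benchmark_mips(iterations, reference_iterations):
    """Rated MIPS = iteration ratio against the 370/158-3 reference run."""
    return REFERENCE_MIPS * iterations / reference_iterations

# a machine completing 500x the reference iterations rates 500 MIPS
print(benchmark_mips(5_000_000, 10_000))  # 500.0
```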
trivia: Back in 1988, the IBM branch office asked if I could help LLNL
(national lab) with standardizing some serial stuff they were working
with, which quickly becomes fibre-channel standard ("FCS", including
some stuff I did back in 1980, initially 1gbit/sec, full-duplex,
aggregate 200mbyte/sec). Then some POK serial stuff is released after
1990 with ES/9000 as ESCON (when it is already obsolete,
17mbytes/sec). Then some POK engineers become involved with FCS and
define a heavy-weight protocol that drastically cuts the native
throughput, which is eventually released as FICON.
The latest (public) benchmark I can find is the z196 "Peak I/O", which gets 2M IOPS with 104 FICON. At the same time, an FCS was announced for E5-2600 server blades claiming over a million IOPS (two such FCS have higher throughput than 104 FICON). Also, IBM documents recommend keeping SAPs (system assist processors, which do the actual I/O) to 70% CPU (which would be about 1.5M IOPS). Also, with regard to mainframe CKD DASD throughput: none have been made for decades, all being simulated on industry standard fixed-block disks.
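The arithmetic behind that comparison, using the figures from the paragraph above (the 70% SAP rule gives ~1.4M on these numbers, which the text rounds to about 1.5M):

```python
# z196 "Peak I/O" vs E5-2600-era fibre-channel, figures from the post.
z196_peak_iops = 2_000_000         # z196 "Peak I/O" with 104 FICON
ficon_count = 104
per_ficon = z196_peak_iops / ficon_count          # ~19,231 IOPS per FICON

fcs_claimed_iops = 1_000_000       # single FCS claimed "over a million" IOPS
fcs_needed = z196_peak_iops // fcs_claimed_iops   # 2 FCS cover the 104-FICON peak

sap_capped_iops = z196_peak_iops * 70 // 100      # SAPs kept to 70% busy
print(round(per_ficon), fcs_needed, sap_capped_iops)  # 19231 2 1400000
```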
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
ficon and "FCS" posts
https://www.garlic.com/~lynn/submisc.html#ficon
megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter
some posts mentioning i86 and risc micro-ops
https://www.garlic.com/~lynn/2024d.html#94 Mainframe Integrity
https://www.garlic.com/~lynn/2024c.html#73 Mainframe and Blade Servers
https://www.garlic.com/~lynn/2024c.html#2 ReBoot Hill Revisited
https://www.garlic.com/~lynn/2024b.html#98 IBM 360 Announce 7Apr1964
https://www.garlic.com/~lynn/2024b.html#53 Vintage Mainframe
https://www.garlic.com/~lynn/2024b.html#42 Vintage Mainframe
https://www.garlic.com/~lynn/2024.html#81 Benchmarks
https://www.garlic.com/~lynn/2024.html#46 RS/6000 Mainframe
https://www.garlic.com/~lynn/2022d.html#6 Computer Server Market
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: 3081 (/3090) TCMs and Service Processor Date: 18 Sep, 2024 Blog: FacebookTCM w/3081, two issues:
• enormous number of circuits ... using some warmed-over Future System
technology (i.e. FS was to completely replace 370, and 370 efforts
were being killed off; when FS finally implodes, there is a mad rush
to get stuff back into the 370 product pipelines, including the
quick&dirty 3033&3081 in parallel). TCMs were needed in order to pack
all those circuits into a reasonable space
http://www.jfsowa.com/computer/memo125.htm
• field engineering had a scoping bootstrap diagnostic/repair process. With all the circuits inside TCMs, that was no longer possible, so they introduced a "service processor" ... the service processor could be scoped, diagnosed and "repaired/fixed" ... then the service processor had lots of probes into the TCMs which could be used to diagnose ... and possibly replace a whole TCM.
trivia: come trout/3090, they were going to use a 4331 running a
highly modified version of VM370R6 for the service processor ...
before ship, it was upgraded to a pair of redundant 4361s (running the
modified VM370R6, 3370 FBA disks, all service screens done in CMS
IOS3270).
mentions 3092 (service processor):
https://web.archive.org/web/20230719145910/https://www.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3090.html
Future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
posts mentioning service processors needed for TCMs
https://www.garlic.com/~lynn/2022c.html#107 TCMs & IBM Mainframe
https://www.garlic.com/~lynn/2022.html#20 Service Processor
https://www.garlic.com/~lynn/2021c.html#58 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2019b.html#80 TCM
https://www.garlic.com/~lynn/2018d.html#48 IPCS, DUMPRX, 3092, EREP
https://www.garlic.com/~lynn/2017g.html#56 What is the most epic computer glitch you have ever seen?
https://www.garlic.com/~lynn/2017c.html#94 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2017c.html#88 GREAT presentation on the history of the mainframe
https://www.garlic.com/~lynn/2017c.html#50 Mainframes after Future System
https://www.garlic.com/~lynn/2016f.html#86 3033
https://www.garlic.com/~lynn/2014e.html#14 23Jun1969 Unbundling Announcement
https://www.garlic.com/~lynn/2014.html#31 Hardware failures (was Re: Scary Sysprogs ...)
https://www.garlic.com/~lynn/2012e.html#38 A bit of IBM System 360 nostalgia
https://www.garlic.com/~lynn/2012c.html#23 M68k add to memory is not a mistake any more
https://www.garlic.com/~lynn/2011m.html#21 Supervisory Processors
https://www.garlic.com/~lynn/2011f.html#42 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011f.html#32 At least two decades back, some gurus predicted that mainframes would disappear
https://www.garlic.com/~lynn/2009b.html#22 Evil weather
https://www.garlic.com/~lynn/2004p.html#37 IBM 3614 and 3624 ATM's
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: Microsoft makes a lot of money, Is Intel exceptionally unsuccessful as an architecture designer? Newsgroups: comp.arch Date: Thu, 19 Sep 2024 08:20:56 -1000David Brown <david.brown@hesbynett.no> writes:
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
MS employees were commenting that customers had been buying the latest releases for the new features ... but it had reached the point where the releases they were running now had 98% of the features they wanted (and the company wasn't sure what to do next).
posts mentioning 1996 MSDC Moscone
https://www.garlic.com/~lynn/2024b.html#70 HSDT, HA/CMP, NSFNET, Internet
https://www.garlic.com/~lynn/2024b.html#32 HA/CMP
https://www.garlic.com/~lynn/2023g.html#91 Vintage Christmas Tree Exec, Email, Virus, and phone book
https://www.garlic.com/~lynn/2023g.html#9 Viruses & Exploits
https://www.garlic.com/~lynn/2023c.html#53 Conflicts with IBM Communication Group
https://www.garlic.com/~lynn/2018c.html#64 Has Microsoft commuted suicide
https://www.garlic.com/~lynn/2016d.html#79 Is it a lost cause?
https://www.garlic.com/~lynn/2016d.html#69 Open DoD's Doors To Cyber Talent, Carter Asks Congress
https://www.garlic.com/~lynn/2016b.html#106 Computers anyone?
https://www.garlic.com/~lynn/2015g.html#39 [Poll] Computing favorities
https://www.garlic.com/~lynn/2015g.html#35 [Poll] Computing favorities
https://www.garlic.com/~lynn/2015e.html#35 The real story of how the Internet became so vulnerable
https://www.garlic.com/~lynn/2015c.html#87 On a lighter note, even the Holograms are demonstrating
https://www.garlic.com/~lynn/2013.html#45 New HD
https://www.garlic.com/~lynn/2012j.html#97 Gordon Crovitz: Who Really Invented the Internet?
https://www.garlic.com/~lynn/2012j.html#93 Gordon Crovitz: Who Really Invented the Internet?
https://www.garlic.com/~lynn/2012g.html#2 What are the implication of the ongoing cyber attacks on critical infrastructure
https://www.garlic.com/~lynn/2010g.html#66 What is the protocal for GMT offset in SMTP (e-mail) header
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Upside Down and Backwards. The Missteps of Dismissing John Boyd Call for Deeper Understanding Date: 19 Sep, 2024 Blog: FacebookUpside Down and Backwards. The Missteps of Dismissing John Boyd Call for Deeper Understanding
I was introduced to John Boyd in the early 80s and would sponsor his
briefings at IBM. First time I tried to do it through plant site
employee education and at first they agreed. However, as I provided
more information about prevailing in competitive situations they
changed their mind. They said that IBM spends lots of money on
educating management on the handling of employees and they felt it
would be counter productive to expose general employees to Boyd; I
should restrict the audience just to senior members of competitive
analysis departments. Instead, first time was in IBM San Jose Research
auditorium, open to all (although it was when I first discovered I
needed cookie guards for the refreshments during breaks, non-attendees
would wander by clearing everything off the tables, totally ignoring
signs). Related:
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
Boyd posts & urls
https://www.garlic.com/~lynn/subboyd.html
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: F16 "Directed Appropriations" Date: 20 Sep, 2024 Blog: FacebookI was introduced to John Boyd in the early 80s and would sponsor his briefings at IBM. The F15 started out as a swing-wing followon to the F111; Boyd redid the design, eliminating the swing-wing and cutting weight nearly in half, showing that the benefits of the swing-wing were more than offset by its weight. He then was responsible for the YF16 and YF17 (which became the F16 & F18). He then was involved in the F20/tigershark ... much simpler, much lower maintenance hrs, much higher flying hrs per maintenance hr and much cheaper ... for the export market. The F16 forces then came in and lobbied congress for "directed appropriations" USAID (could only be spent on F16s) for the F20 candidate countries (candidate countries claimed that the F20 was much more appropriate for their needs, but they could get F16s essentially for "free").
F16 forces also lambasted Boyd as a Luddite for criticizing the introduction of the F16 "heads-up" display ... initially scrolling digital numbers (claiming that it was a distraction, taking more brain power and elapsed time to translate the scrolling numbers into meaning). Note: in the 50s, when he was an instructor at the USAF weapons school, he was known as "40sec Boyd", taking on all challengers and beating them within 40sec. Asked why "40sec" when he always won within 20sec, he replied that there might be somebody in the world almost as good as he was and he would need the extra time.
YF16 with relaxed stability requiring "fly-by-wire" (computer) that
was fast enough for managing flight control surfaces
https://en.wikipedia.org/wiki/General_Dynamics_F-16_Fighting_Falcon#Relaxed_stability_and_fly-by-wire
https://en.wikipedia.org/wiki/Relaxed_stability
https://fightson.net/150/general-dynamics-f-16-fighting-falcon/
The F-16 is the first production fighter aircraft intentionally
designed to be slightly aerodynamically unstable, also known as
"relaxed static stability" (RSS), to improve manoeuvrability. Most
aircraft are designed with positive static stability, which induces
aircraft to return to straight and level flight attitude if the pilot
releases the controls; this reduces manoevrability as the inherent
stability has to be overcome. Aircraft with negative stability are
designed to deviate from controlled flight and thus be more
maneuverable. At supersonic speeds the F-16 gains stability
(eventually positive) due to aerodynamic changes.
... snip ...
Boyd posts & urls
https://www.garlic.com/~lynn/subboyd.html
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: F16 "Directed Appropriations" Date: 20 Sep, 2024 Blog: Facebookre:
1991 F16/F20 Usenet ... not exactly the same as I remember (USAID
"directed appropriation" lobbying) details
https://groups.google.com/g/sci.military/c/TLBfTeeXTv0/m/5zJ1L_vcwWUJ
minor F16 & USENET connection
https://www.usenix.org/system/files/login/articles/login_aug15_09_salus.pdf
UUCP was taken up widely and this led to a need for improvements. The
next version was written by Lesk and Dave Nowitz, with contributions
by Greg Chesson, and appeared in Seventh Edition UNIX in October 1978.
... snip ...
2nd half of the 80s, I was on Greg Chesson's XTP TAB (lots of opposition to this and other stuff from the company's communication products group) and there were some MIC companies involved ... including use for an F16 simulator. Because of the MIC activity, it was taken to the ISO-chartered ANSI X3S3.3 as HSP for standardization. Eventually HSP was rejected by X3S3.3 because of ISO requirements that it could only do standards for protocols that conform to the OSI model (XTP/HSP violated the model because it 1) supported internetwork protocol, which doesn't exist in OSI, 2) bypassed the transport/network interface, and 3) went directly from transport to LAN MAC, which doesn't exist in OSI, sitting somewhere in the middle of network/layer3).
Boyd posts & urls
https://www.garlic.com/~lynn/subboyd.html
XTP/HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
post referencing IBM making sure OSI stays aligned with SNA:
https://www.garlic.com/~lynn/2024c.html#57 IBM Mainframe, TCP/IP, Token-ring, Ethernet
some other recent posts mentioning OSI
https://www.garlic.com/~lynn/2024e.html#49 OSI and XTP
https://www.garlic.com/~lynn/2024d.html#73 GOSIP
https://www.garlic.com/~lynn/2024d.html#7 TCP/IP Protocol
https://www.garlic.com/~lynn/2024c.html#92 TCP Joke
https://www.garlic.com/~lynn/2024b.html#99 OSI: The Internet That Wasn't
https://www.garlic.com/~lynn/2024b.html#54 Vintage Mainframe
https://www.garlic.com/~lynn/2024.html#84 SNA/VTAM
https://www.garlic.com/~lynn/2023f.html#82 Vintage Mainframe OSI
https://www.garlic.com/~lynn/2023d.html#56 How the Net Was Won
https://www.garlic.com/~lynn/2023.html#104 XTP, OSI & LAN/MAC
https://www.garlic.com/~lynn/2022g.html#49 Some BITNET (& other) History
https://www.garlic.com/~lynn/2022f.html#20 IETF TCP/IP versus ISO OSI
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: HASP, JES2, NJE, VNET/RSCS Date: 21 Sep, 2024 Blog: FacebookTook a two credit hr intro to fortran/computers; the univ had a 709/1401 but was getting a 360/67 (for tss/360). At the end of the semester I was hired to rewrite 1401 MPIO in 360 assembler (MPIO: unit record front-end for the 709; the univ. got a 360/30 temporarily replacing the 1401, pending arrival of the 360/67). The univ shutdown the datacenter on weekends and I would have the whole place dedicated (although 48hrs w/o sleep made monday classes hard). I was given a whole bunch of software and hardware manuals and got to design my own monitor, device drivers, interrupt handlers, error recovery, storage management, etc. Then within a year of taking the intro class, the 360/67 came in and I was hired fulltime, responsible for os/360 (tss/360 never really came to production, so it ran as os/360). Student fortran jobs ran in less than a second on the 709, but initially over a minute with os/360. I install HASP (MFT9.5), which cuts the time in half. I then start redoing STAGE2 SYSGEN (MFT11), carefully placing datasets and PDS members to optimize arm seek and multi-track search, cutting another 2/3rds to 12.9secs. Student fortran never ran faster than on the 709 until I install Univ. of Waterloo WATFOR (MVT appears w/R12, but I continue gen'ing MFT until R15/16). For MVT18 & HASP, I delete the 2780/RJE code from HASP (to reduce memory requirements) and implement 2741&TTY terminal support and an editor (w/CMS edit syntax) for CRJE.
Before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidating all dataprocessing into an independent business unit). I think the Renton datacenter may have been the largest in the world (a couple hundred million in 360 stuff), 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room. Also, the Boeing Huntsville 2-CPU 360/67 is moved up to Seattle. They had gotten it, similar to the univ, but ran it as two 360/65s. It had come with several 2250s for CAD/CAM ... and they ran into the MVT storage management problem and modified MVT13 to run in virtual memory (but without paging) to partially alleviate the problems.
Note: a little over a decade ago I was asked to track down the decision to add virtual memory to all 370s and found the staff member to the executive making the decision. Basically MVT storage management was so bad that region sizes normally had to be specified four times larger than used; as a result, a 1mbyte 370/165 only ran four concurrent regions, insufficient to keep the 165 busy and justify it. Initially, running in a 16mbyte virtual address space (VS2/SVS) could increase the number of regions by a factor of four (capped at 15 for the 4-bit storage protect keys) ... sort of like running MVT in a CP67 16mbyte virtual machine ... with little or no paging. And then, to get past the storage protect limit, they moved to VS2/MVS with each region in a different virtual address space.
MVS trivia (MVS song sung at SHARE HASP sing-along) ... I was there
for 1st performance
http://www.mxg.com/thebuttonman/boney.asp
Trivia: the IBM internal corporate network started with CP/67 Science
Center network ... from one of the science center members (that
invented GML in 1969):
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for
the project, something I was allowed to do part-time because of my
knowledge of the user requirements. My real job was to encourage the
staffs of the various scientific centers to make use of the
CP-67-based Wide Area Network that was centered in Cambridge.
... snip ...
HASP/JES
https://en.wikipedia.org/wiki/Houston_Automatic_Spooling_Priority
NJE originally had "TUCC" in col68-71 of the source cards (for the univ. where it originated). Early 70s, internal batch OS360/HASP (and follow-on batch JES2) installations wanted to connect into the CP/67 wide-area network. An issue was that the HASP version was completely tied to batch OS/360 ... while VNET/RSCS had a clean layered implementation. As a result, a "clean" VNET/RSCS device driver was done that simulated the HASP network implementation, allowing the batch operating systems to be connected into the growing internal network.
However, the HASP (and later JES2) implementations had other issues, including 1) they defined network nodes in spare entries of the 256-entry pseudo device table (maybe 160-180 spares, while the internal network was already past 256 nodes) and would trash traffic where the origin or destination node wasn't in the local table, and 2) since network fields were intermixed with job control fields, traffic originating from a HASP/JES2 system at a slightly different release/version from the destination HASP/JES2 could result in crashing the destination host operating system.
In the 1st case, HASP/JES2 systems were restricted to isolated boundary/edge nodes. In the 2nd case, a body of VNET/RSCS simulated HASP/JES2 drivers evolved that could recognize that origin and destination HASP/JES2 systems were at different release/versions and re-organize header fields to correspond to the specific version/release level of a directly connected destination HASP/JES2 system (there was an infamous case where local JES2 mods in San Jose, CA were crashing JES2 systems in Hursley, England, and the intermediate VNET/RSCS system was blamed because its drivers hadn't been updated to handle the local changes in San Jose). Another case was when the west coast STL (now SVL) lab and Hursley installed a double-hop satellite link (wanting to use each other's systems off-shift) ... it worked fine with VNET/RSCS but wouldn't work at all with JES2 (because its telecommunication layer had a fixed round-trip max delay ... and the double-hop round trip exceeded the limit).
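A hedged sketch of what those VNET/RSCS simulated-HASP/JES2 drivers did: re-organize a header into the field layout the directly connected destination release expected, filling fields the origin release didn't carry. The per-release layouts and field names below are invented for illustration; real NJE headers were far richer.

```python
# Gateway-style header rewrite: because NJE intermixed networking and
# job-control fields, a header built by one JES2 release could crash a
# host at another release, so the intermediate driver re-orders and
# fills fields to match the destination's layout.  (Layouts hypothetical.)

HEADER_LAYOUTS = {
    "jes2_r4": ["origin", "dest", "priority", "jobname"],
    "jes2_r5": ["origin", "dest", "jobname", "priority", "class"],
}

def rewrite_header(fields, dest_release, defaults=None):
    """Re-organize a parsed header dict into the destination release's
    field order, supplying defaults for fields the origin didn't send."""
    if defaults is None:
        defaults = {"class": "A", "priority": 5}
    layout = HEADER_LAYOUTS[dest_release]
    return [(name, fields.get(name, defaults.get(name))) for name in layout]

# an r4-origin header forwarded to an r5 destination gains a "class" field
hdr = {"origin": "SJRLVM1", "dest": "HURSLEY", "priority": 3, "jobname": "PAYROLL"}
print(rewrite_header(hdr, "jes2_r5"))
```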
Person responsible for the VNET/RSCS implementation (passed Aug2020)
https://en.wikipedia.org/wiki/Edson_Hendricks
In June 1975, MIT Professor Jerry Saltzer accompanied Hendricks to
DARPA, where Hendricks described his innovations to the principal
scientist, Dr. Vinton Cerf. Later that year in September 15-19 of 75,
Cerf and Hendricks were the only two delegates from the United States,
to attend a workshop on Data Communications at the International
Institute for Applied Systems Analysis, 2361 Laxenburg Austria where
again, Hendricks spoke publicly about his innovative design which
paved the way to the Internet as we know it today.
... snip ...
The VNET/RSCS technology was also used for the corporate sponsored
univ BITNET
https://en.wikipedia.org/wiki/BITNET
With the conversion of ARPANET (with IMPs) to internetworking protocol (and "Internet") on 1Jan1983, there were approx. 100 IMPs and 255 hosts while the internal network was rapidly approaching 1000.
In the early 80s, I got the HSDT project, T1 and faster computer links (both satellite and terrestrial) ... but lots of conflict with the communication products group. Note IBM had the 2701 controller in the 60s that supported T1 ... but apparently with the transition to SNA/VTAM in the 70s, various issues capped controllers at 56kbit/sec links.
HASP, ASP, JES2, JES3, NJE posts
https://www.garlic.com/~lynn/submain.html#hasp
IBM cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
some posts mentioning MPIO, Boeing CFO, Renton, and Huntsville
https://www.garlic.com/~lynn/2024e.html#24 Public Facebook Mainframe Group
https://www.garlic.com/~lynn/2024d.html#63 360/65, 360/67, 360/75 750ns memory
https://www.garlic.com/~lynn/2024b.html#49 Vintage 2250
https://www.garlic.com/~lynn/2024.html#87 IBM 360
https://www.garlic.com/~lynn/2023g.html#39 Vintage Mainframe
https://www.garlic.com/~lynn/2023g.html#19 OS/360 Bloat
https://www.garlic.com/~lynn/2023e.html#34 IBM 360/67
https://www.garlic.com/~lynn/2020.html#32 IBM TSS
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM - Making The World Work Better Date: 22 Sep, 2024 Blog: Facebook
re:
... the damage had been done in the two decades between Learson and Gerstner ... and the company was being prepared for breakup when they brought in Gerstner to (somewhat) reverse the breakup ... but it was a difficult time trying to save a company that was on the verge of going under ... IBM somewhat barely surviving as a financial engineering company
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
former amex president posts
https://www.garlic.com/~lynn/submisc.html#gerstner
stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
some posts mentioning gerstner, financial engineering, stock buybacks,
and manipulating mainframe earnings
https://www.garlic.com/~lynn/2024e.html#51 Former AMEX President and New IBM CEO
https://www.garlic.com/~lynn/2024.html#120 The Greatest Capitalist Who Ever Lived
https://www.garlic.com/~lynn/2023c.html#72 Father, Son & CO
https://www.garlic.com/~lynn/2023b.html#74 IBM Breakup
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM - Making The World Work Better Date: 23 Sep, 2024 Blog: Facebook
re:
before getting HA/6000 approved ... which I later renamed HA/CMP
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
my wife was con'ed into co-author for a response to a gov. agency RFI ... that involved a large campus-like, super-secure network ... where she introduced 3-tier architecture (the communication group was fiercely fighting off 2-tier, client/server and distributed computing). Then during HA/CMP, we were also out making customer executive presentations on 3-tier, ethernet, TCP/IP, high-performance routers, etc ... and taking all sorts of arrows in the back with misinformation generated by the communication group, token-ring, and SAA forces. The guy running SAA had a large, top-floor, corner office in Somers (and also had been the person back in the mid-70s that had con'ed me into helping Endicott with 138/148 ECPS). We would periodically drop by his office to complain about how badly his people were behaving.
IBM had been marketing a fault-tolerant product with the IBM logo, S/88. The S/88 product administrator had taken to bringing us around to their customers and also got me to write a section for the corporate continuous availability strategy document ... but it got pulled when both Rochester (AS/400) and POK (mainframe) complained that they couldn't meet the requirements.
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
3-tier architecture posts
https://www.garlic.com/~lynn/subnetwork.html#3tier
continuous availability, disaster survivability, geographic
survivability posts
https://www.garlic.com/~lynn/submain.html#available
some other posts mentioning SAA, Somers, and 3-tier
https://www.garlic.com/~lynn/2023c.html#84 Dataprocessing Career
https://www.garlic.com/~lynn/2023b.html#50 Ethernet (& CAT5)
https://www.garlic.com/~lynn/2022h.html#97 IBM 360
https://www.garlic.com/~lynn/2021c.html#85 IBM SNA/VTAM (& HSDT)
https://www.garlic.com/~lynn/2017h.html#21 IBM ... the rise and fall
https://www.garlic.com/~lynn/2014l.html#40 Could this be the wrongest prediction of all time?
https://www.garlic.com/~lynn/2014d.html#41 World Wide Web turns 25 years old
https://www.garlic.com/~lynn/2014b.html#44 Resistance to Java
https://www.garlic.com/~lynn/2010o.html#4 When will MVS be able to use cheap dasd
https://www.garlic.com/~lynn/2010g.html#29 someone smarter than Dave Cutler
https://www.garlic.com/~lynn/2007d.html#43 Is computer history taugh now?
https://www.garlic.com/~lynn/2006c.html#11 Mainframe Jobs Going Away
https://www.garlic.com/~lynn/2005r.html#8 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2004d.html#62 microsoft antitrust
https://www.garlic.com/~lynn/2003p.html#43 Mainframe Emulation Solutions
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: RPG Game Master's Guide Date: 23 Sep, 2024 Blog: Facebook
I would periodically drop in on TYMSHARE and/or see them at the monthly user group meetings hosted by Stanford SLAC. In Aug1976, TYMSHARE started offering their VM370/CMS-based online computer conferencing system to the (IBM mainframe) SHARE user group as "VMSHARE" ... archives here
I cut a deal with TYMSHARE for a monthly tape dump of all VMSHARE (and later also PCSHARE) files for putting up on the internal network and systems. On one visit to TYMSHARE they demo'ed a new game (ADVENTURE) that somebody found on the Stanford SAIL PDP10 system and ported to VM370/CMS ... I got a copy and started making it (also) available on internal networks/systems. I would send source to anybody that could demonstrate they got all the points. Relatively shortly, versions with lots more points appeared, as well as PLI versions.
Colossal Cave Adventure
https://en.wikipedia.org/wiki/Colossal_Cave_Adventure
Adventure Game
https://en.wikipedia.org/wiki/Adventure_game
recent posts mentioning TYMSHARE, VMSHARE, adventure
https://www.garlic.com/~lynn/2024e.html#20 TYMSHARE, ADVENTURE/games
https://www.garlic.com/~lynn/2024c.html#120 Disconnect Between Coursework And Real-World Computers
https://www.garlic.com/~lynn/2024c.html#43 TYMSHARE, VMSHARE, ADVENTURE
https://www.garlic.com/~lynn/2024c.html#25 Tymshare & Ann Hardy
https://www.garlic.com/~lynn/2023f.html#116 Computer Games
https://www.garlic.com/~lynn/2023f.html#60 The Many Ways To Play Colossal Cave Adventure After Nearly Half A Century
https://www.garlic.com/~lynn/2023f.html#7 Video terminals
https://www.garlic.com/~lynn/2023e.html#9 Tymshare
https://www.garlic.com/~lynn/2023d.html#115 ADVENTURE
https://www.garlic.com/~lynn/2023c.html#14 Adventure
https://www.garlic.com/~lynn/2023b.html#86 Online systems fostering online communication
https://www.garlic.com/~lynn/2023.html#37 Adventure Game
https://www.garlic.com/~lynn/2022e.html#1 IBM Games
https://www.garlic.com/~lynn/2022c.html#28 IBM Cambridge Science Center
https://www.garlic.com/~lynn/2022b.html#107 15 Examples of How Different Life Was Before The Internet
https://www.garlic.com/~lynn/2022b.html#28 Early Online
https://www.garlic.com/~lynn/2022.html#123 SHARE LSRAD Report
https://www.garlic.com/~lynn/2022.html#57 Computer Security
https://www.garlic.com/~lynn/2021k.html#102 IBM CSO
https://www.garlic.com/~lynn/2021h.html#68 TYMSHARE, VMSHARE, and Adventure
https://www.garlic.com/~lynn/2021e.html#8 Online Computer Conferencing
https://www.garlic.com/~lynn/2021b.html#84 1977: Zork
https://www.garlic.com/~lynn/2021.html#85 IBM Auditors and Games
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: HASP, JES2, NJE, VNET/RSCS Date: 24 Sep, 2024 Blog: Facebook
re:
little over a decade ago, I was asked if I could track down the
decision to add virtual memory to all 370s. I found the staff member to
the executive making the decision; basically MVT storage management was
so bad that region sizes had to be specified four times larger than
used, so a 1mbyte 370/165 typically ran only four concurrent regions,
insufficient to keep a 165 busy and justified. Moving MVT to a 16mbyte
virtual address space (VS2/SVS) allowed the number of concurrent
regions to be increased by a factor of four (capped at 15 because of
the 4bit storage protect keys) with little or no paging (sort of like
running MVT in a CP67 16mbyte virtual machine). Old archived post with
some of the email exchange (including discussion of early "spool"
history).
https://www.garlic.com/~lynn/2011d.html#73
VS2 was later enhanced with a 16mbyte virtual address space for each region (VS2/MVS), a work-around for the storage protect key limitation. Also mentions Simpson (one of the people that created HASP) had modified MFT-II with virtual memory and a virtual memory paged filesystem ("RASP") ... which wasn't used; he left IBM for Amdahl where he recreated RASP from scratch.
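The 15-region cap follows directly from the key width; a toy calculation (the 4x overcommit and the four-region 370/165 figure come from the post itself):

```python
KEY_BITS = 4                  # 360/370 storage-protect key width
key_values = 2 ** KEY_BITS    # 16 possible key values
max_regions = key_values - 1  # key 0 reserved for the system -> 15 usable

regions_mvt = 4               # ~4 concurrent regions on a 1mbyte 370/165 under MVT (per the post)
# 16mbyte VS2/SVS allowed ~4x more regions, but the protect keys cap it
regions_svs = min(regions_mvt * 4, max_regions)

print(max_regions, regions_svs)   # 15 15
```

i.e. the 4x improvement would have allowed 16 concurrent regions, but the 4-bit key leaves only 15 distinct protection domains for user regions.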
following post mentions my wife had been in the Gburg JES group and
one of the catchers for ASP/JES3 ... then was con'ed into going to POK
responsible for loosely-coupled (shared DASD) architecture; she didn't
remain long because of 1) periodic battles with the communication group
trying to force her into using SNA/VTAM for loosely-coupled operation
and 2) little uptake (until much later with SYSPLEX and Parallel SYSPLEX)
except for IMS hot-standby (she has story of asking Vern Watts who he
would ask for permission to do hot-standby and he says "nobody", he
would just do it and tell them when it was all done)
https://www.garlic.com/~lynn/2011d.html#74
HASP, ASP, JES2, JES3, NJE posts
https://www.garlic.com/~lynn/submain.html#hasp
peer-coupled shared data architecture posts
https://www.garlic.com/~lynn/submain.html#shareddata
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: IBM Basic Beliefs Date: 24 Sep, 2024 Blog: Facebook
IBM Basic Beliefs
1972, Learson failed in blocking the bureaucrats, careerists, and MBAs
from destroying Watson culture/legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
The damage was done in the 20yrs between Learson's failed effort and
1992, when IBM had one of the largest losses in the history of US
companies and was being reorged into the 13 "baby blues" (a take-off on
the AT&T "baby bells" breakup a decade earlier) in preparation for
breaking up the company
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html
we had already left IBM but get a call from the bowels of Armonk
asking if we could help with the company breakup. Before we get
started, the board brings in the former president of Amex as CEO, who
(somewhat) reverses the breakup ... and uses some of the techniques
used at RJR (ref gone 404, but lives on at wayback machine).
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml
but it was a difficult time saving a company that was on the verge of
going under ... IBM somewhat barely surviving as a financial
engineering company
https://www.garlic.com/~lynn/2024e.html#137 IBM - Making The World Work Better
https://www.garlic.com/~lynn/2024e.html#138 IBM - Making The World Work Better
"Future System" in the first half of the 70s greatly accelerated the
downward spiral. "Computer Wars: The Post-IBM World", Time Books
https://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394
... and perhaps most damaging, the old culture under Watson Snr and Jr
of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE
NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM
lived in the shadow of defeat ... But because of the heavy investment
of face by the top management, F/S took years to kill, although its
wrong headedness was obvious from the very outset. "For the first
time, during F/S, outspoken criticism became politically dangerous,"
recalls a former top executive
... snip ...
more detail:
http://www.jfsowa.com/computer/memo125.htm
IBM downturn/downfall/breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
former amex president posts
https://www.garlic.com/~lynn/submisc.html#gerstner
pension plan posts
https://www.garlic.com/~lynn/submisc.html#pension
stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback
private-equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: The joy of FORTRAN Newsgroups: comp.os.linux.misc, alt.folklore.computers Date: Tue, 24 Sep 2024 16:05:25 -1000
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
MULTICS
https://en.wikipedia.org/wiki/Multics
https://www.multicians.org/history.html
https://web.mit.edu/Saltzer/www/publications/f7y/f7y.html
https://www.tebatt.net/SAT/COGITATIONS/UPcursorLecture/ProjectMAC.html
something of a spinoff, Stratus
https://en.wikipedia.org/wiki/Stratus_VOS
Stratus
https://en.wikipedia.org/wiki/Stratus_VOS#Overview
VOS was coded mainly in PL/I with a small amount of assembly language
before it was migrated to ftServer series.[citation needed] As of 1991,
the system was written in PL/I and C, with only 3% in assembly.[10]
... snip ...
topic drift trivia (I was at CSC for much of the 70s): some of the MIT
CTSS/7094 went to Project MAC on the 5th flr to do MULTICS, others went
to the IBM Cambridge Scientific Center on the 4th flr to do virtual
machines, networking, online&performance applications, etc.
https://en.wikipedia.org/wiki/History_of_CP/CMS
https://en.wikipedia.org/wiki/Cambridge_Scientific_Center
CSC had wanted a 360/50 to modify by adding virtual memory, but all the
spare 360/50s were going to the FAA ATC project, so they had to settle
for a 360/40 to modify, implementing the virtual machine CP40/CMS ...
then when the 360/67 became available standard with virtual memory,
CP40/CMS morphed into CP67/CMS ... precursor to VM370/CMS
https://en.wikipedia.org/wiki/VM_(operating_system)
CTSS RUNOFF
https://en.wikipedia.org/wiki/TYPSET_and_RUNOFF
had been rewritten as "SCRIPT" for CMS. Then in 1969 when three people
at the science center invented GML, GML tag processing was added to
SCRIPT.
https://en.wikipedia.org/wiki/IBM_Generalized_Markup_Language
account by one of the GML inventors about the CP67 wide-area network
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
Actually, the law office application was the original motivation for the
project, something I was allowed to do part-time because of my knowledge
of the user requirements. My real job was to encourage the staffs of the
various scientific centers to make use of the CP-67-based Wide Area
Network that was centered in Cambridge.
... snip ...
Person responsible for CP67 wide-area network:
https://en.wikipedia.org/wiki/Edson_Hendricks
which morphs into the corporate internal network (larger than
arpanet/internet from just about the beginning until sometime mid/late
80s)
technology also used for the corporate sponsored univ bitnet
https://en.wikipedia.org/wiki/BITNET
CSC posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: The joy of FORTRAN Newsgroups: alt.folklore.computers, comp.os.linux.misc Date: Wed, 25 Sep 2024 07:13:56 -1000
Charlie Gibbs <cgibbs@kltpzyxm.invalid> writes:
trivia: 360s were originally supposed to be ascii machines, but the ASCII
unit record machines weren't ready so they were going to (temporarily)
use BCD gear and EBCDIC. biggest computer goof ever:
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM
I used to drop by TYMSHARE and/or see them at monthly meetings hosted at
STANFORD SLAC. They had made their CMS-based online computer
conferencing system available "free" to SHARE in Aug1976 as VMSHARE
archives:
http://vm.marist.edu/~vmshare
I cut a deal to get a monthly tape dump of all VMSHARE (and later PCSHARE) files to make available on the IBM internal network and systems (one difficulty was lawyers who were concerned that internal employees would be contaminated being exposed to direct/unfiltered customer information).
one visit to TYMSHARE they demo'ed a game (ADVENTURE) that somebody found on the Stanford SAIL PDP10 and ported to CMS. I got full source and made it (also) available on the internal network and systems. I would send source to anybody that could demonstrate they got all the points. Relatively shortly, versions with lots more points appeared, as well as PLI versions.
Colossal Cave Adventure
https://en.wikipedia.org/wiki/Colossal_Cave_Adventure
Adventure Game
https://en.wikipedia.org/wiki/Adventure_game
recent posts mentioning bob bemer and/or tymshare/adventure
https://www.garlic.com/~lynn/2024e.html#139 RPG Game Master's Guide
https://www.garlic.com/~lynn/2024e.html#97 COBOL history, Article on new mainframe use
https://www.garlic.com/~lynn/2024e.html#20 TYMSHARE, ADVENTURE/games
https://www.garlic.com/~lynn/2024e.html#2 DASD CKD
https://www.garlic.com/~lynn/2024d.html#107 Biggest Computer Goof Ever
https://www.garlic.com/~lynn/2024d.html#105 Biggest Computer Goof Ever
https://www.garlic.com/~lynn/2024d.html#103 IBM 360/40, 360/50, 360/65, 360/67, 360/75
https://www.garlic.com/~lynn/2024d.html#74 Some Email History
https://www.garlic.com/~lynn/2024c.html#120 Disconnect Between Coursework And Real-World Computers
https://www.garlic.com/~lynn/2024c.html#93 ASCII/TTY33 Support
https://www.garlic.com/~lynn/2024c.html#53 IBM 3705 & 3725
https://www.garlic.com/~lynn/2024c.html#43 TYMSHARE, VMSHARE, ADVENTURE
https://www.garlic.com/~lynn/2024c.html#25 Tymshare & Ann Hardy
https://www.garlic.com/~lynn/2024c.html#14 Bemer, ASCII, Brooks and Mythical Man Month
https://www.garlic.com/~lynn/2024b.html#113 EBCDIC
https://www.garlic.com/~lynn/2024b.html#81 rusty iron why ``folklore''?
https://www.garlic.com/~lynn/2024.html#102 EBCDIC Card Punch Format
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: The joy of FORTRAN Newsgroups: comp.os.linux.misc, alt.folklore.computers Date: Wed, 25 Sep 2024 07:17:39 -1000
rbowman <bowman@montana.com> writes:
before ms/dos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was cp/m
https://en.wikipedia.org/wiki/CP/M
before developing cp/m, kildall worked on IBM cp/67-cms at npg
https://en.wikipedia.org/wiki/Naval_Postgraduate_School
(virtual machine) CP67 (precursor to vm370)
https://en.wikipedia.org/wiki/CP-67
other (virtual machine) history
https://www.leeandmelindavarian.com/Melinda#VMHist
csc posts
https://www.garlic.com/~lynn/subtopic.html#545tech
--
virtualization experience starting Jan1968, online at home since Mar1970
From: Lynn Wheeler <lynn@garlic.com> Subject: Re: The joy of FORTRAN Newsgroups: comp.os.linux.misc, alt.folklore.computers Date: Wed, 25 Sep 2024 07:31:32 -1000
scott@slp53.sl.home (Scott Lurndal) writes:
I was at San Jose Research, but doing some amount of work out at Los Gatos lab and they let me have part of a wing with offices and lab. They were doing lots of work with "TWS", from Metaware (in santa cruz) ... and had implemented 370 Pascal which they used for developing VLSI tools. It was eventually released to customers as VS/Pascal.
I used it to rewrite the VM370 spool to run in a virtual address space, along with some number of other VM370 features.
In the early 90s, IBM was going through its troubles and selling off and/or offloading lots of stuff (real estate, divisions, etc), including lots of VLSI tools to an industry VLSI tools vendor. However, the standard VLSI shop ran on SUN machines, so everything had to be ported to SUN.
I had left IBM, but got a contract from the Los Gatos lab to port a 50,000 statement VS/Pascal VLSI design tool to SUN. Ran into all sorts of problems; it was easy to drop by SUN up the road, but they had outsourced SUN pascal to an organization on the opposite side of the world, so anything required at least a day's turnaround. In retrospect, SUN pascal seemed to have been used for little else than academic instruction ... and it would have been easier to rewrite the whole thing in C.
some posts mentioning Los Gatos lab, metaware/tws, pascal
https://www.garlic.com/~lynn/2024c.html#114 Disconnect Between Coursework And Real-World Computers
https://www.garlic.com/~lynn/2024.html#70 IBM AIX
https://www.garlic.com/~lynn/2024.html#8 Niklaus Wirth 15feb1934 - 1jan2024
https://www.garlic.com/~lynn/2024.html#3 How IBM Stumbled onto RISC
https://www.garlic.com/~lynn/2023g.html#68 Assembler & non-Assembler For System Programming
https://www.garlic.com/~lynn/2023c.html#98 Fortran
https://www.garlic.com/~lynn/2022h.html#40 Mainframe Development Language
https://www.garlic.com/~lynn/2022g.html#6 "In Defense of ALGOL"
https://www.garlic.com/~lynn/2022f.html#13 COBOL and tricks
https://www.garlic.com/~lynn/2022d.html#82 ROMP
https://www.garlic.com/~lynn/2021j.html#23 Programming Languages in IBM
https://www.garlic.com/~lynn/2021i.html#45 not a 360 either, was Design a better 16 or 32 bit processor
https://www.garlic.com/~lynn/2021g.html#31 IBM Programming Projects
https://www.garlic.com/~lynn/2021d.html#5 What's Fortran?!?!
https://www.garlic.com/~lynn/2021c.html#95 What's Fortran?!?!
https://www.garlic.com/~lynn/2021.html#37 IBM HA/CMP Product
https://www.garlic.com/~lynn/2017j.html#18 The Windows 95 chime was created on a Mac
https://www.garlic.com/~lynn/2017g.html#43 The most important invention from every state
https://www.garlic.com/~lynn/2017f.html#94 Jean Sammet, Co-Designer of a Pioneering Computer Language, Dies at 89
https://www.garlic.com/~lynn/2017e.html#24 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2016c.html#62 Which Books Can You Recommend For Learning Computer Programming?
https://www.garlic.com/~lynn/2015g.html#52 [Poll] Computing favorities
https://www.garlic.com/~lynn/2015g.html#51 [Poll] Computing favorities
https://www.garlic.com/~lynn/2013m.html#36 Quote on Slashdot.org
https://www.garlic.com/~lynn/2013l.html#59 Teletypewriter Model 33
https://www.garlic.com/~lynn/2011m.html#32 computer bootlaces
https://www.garlic.com/~lynn/2011i.html#69 Making Z/OS easier - Effectively replacing JCL with Unix like commands
https://www.garlic.com/~lynn/2010n.html#54 PL/I vs. Pascal
https://www.garlic.com/~lynn/2010c.html#29 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2009l.html#36 Old-school programming techniques you probably don't miss
https://www.garlic.com/~lynn/2009e.html#11 Lack of bit field instructions in x86 instruction set because of ?patents ?
https://www.garlic.com/~lynn/2008j.html#77 CLIs and GUIs
https://www.garlic.com/~lynn/2007j.html#14 Newbie question on table design
https://www.garlic.com/~lynn/2005e.html#1 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005e.html#0 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005b.html#14 something like a CTC on a PC
https://www.garlic.com/~lynn/2004n.html#30 First single chip 32-bit microprocessor
https://www.garlic.com/~lynn/2004f.html#42 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2003h.html#52 Question about Unix "heritage"
https://www.garlic.com/~lynn/2002q.html#19 Beyond 8+3
--
virtualization experience starting Jan1968, online at home since Mar1970