List of Archived Posts

2023 Newsgroup Postings (01/01 - 02/23)

AUSMINIUM FOUND IN HEAVY RED-TAPE DISCOVERY
IMS & DB2
big and little, Can BCD and binary multipliers share circuitry?
AUSMINIUM FOUND IN HEAVY RED-TAPE DISCOVERY
Mainframe Channel Redrive
1403 printer
Mainframe Channel Redrive
Leopards Eat Kevin McCarthy's Face
Ponzi Hospitals and Counterfeit Capitalism
Riddle solved: Why was Roman concrete so durable?
History Is Un-American. Real Americans Create Their Own Futures
IBM Loses Top Patent Spot After Decades as Leader
IBM Marketing, Sales, Branch Offices
NCSS and Dun & Bradstreet
360 Announce and then the Future System Disaster
IBM Marketing, Sales, Branch Offices
INTEROP 88 Santa Clara
Gangsters of Capitalism
PROFS trivia
Intel's Core i9-13900KS breaks the 6GHz barrier
IBM Change
IBM Change
IBM Punch Cards
Health Care in Crisis: Warning! US Capitalism is Lethal
IBM Punch Cards
IBM Punch Cards
IBM Punch Cards
IBM Punch Cards
IBM Punch Cards
Medicare Begins to Rein In Drug Costs for Older Americans
IBM Change
IBM Change
IBM Change
IBM Punch Cards
IBM Punch Cards
Revealed: Exxon Made "Breathtakingly" Accurate Climate Predictions in 1970's and 80's
IBM changes between 1968 and 1989
Adventure Game
Disk optimization
IBM AIX
IBM AIX
IBM 3081 TCM
IBM AIX
IBM changes between 1968 and 1989
Adventure Game
IBM 3081 TCM
MTS & IBM 360/67
370/125 and MVCL instruction
MTS & IBM 360/67
23Jun1969 Unbundling and Online IBM Branch Offices
370 Virtual Memory Decision
IBM Bureaucrats, Careerists, MBAs (and Empty Suits)
IBM Bureaucrats, Careerists, MBAs (and Empty Suits)
Adventure Game
Classified Material and Security
z/VM 50th - Part 6, long winded zm story (before z/vm)
Classified Material and Security
Almost IBM class student
Almost IBM class student
Classified Material and Security
Boyd & IBM "Wild Duck" Discussion
Software Process
IBM (FE) Retain
Boeing to deliver last 747, the plane that democratized flying
Boeing to deliver last 747, the plane that democratized flying
7090/7044 Direct Couple
Boeing to deliver last 747, the plane that democratized flying
IBM "Green Card"
IBM and OSS
IBM and OSS
GML, SGML, & HTML
IBM 4341
IBM 4341
IBM 4341
IBM 4341
The Pentagon Saw a Warship Boondoggle
IBM 4341
IBM/PC and Microchannel
The Progenitor of Inequalities - Corporate Personhood vs. Human Beings
The Enormous Limitations of U.S. Liberal Democracy and Its Consequences: The Growth Of Fascism
ASCII/TTY Terminal Support
IBM 4341
Memories of Mosaic
Memories of Mosaic
Memories of Mosaic
Memories of Mosaic
IBM San Jose
IBM San Jose
Northern Va. is the heart of the internet. Not everyone is happy about that
IBM San Jose
Performance Predictor, IBM downfall, and new CEO
IBM 4341
IBM 4341
IBM 4341
The Ladder of Incompetence: 5 Reasons We Promote the Wrong People
IBM San Jose
Mainframe Assembler
Online Computer Conferencing
'Pay your fair share': Biden's talk on taxes echoes our findings The State of the Union speech hammered on tax inequality
Online Computer Conferencing
IBM Introduces 'Vela' Cloud AI Supercomputer Powered by Intel, Nvidia
IBM ROLM
IBM ROLM
IBM ROLM
XTP, OSI & LAN/MAC
IBM Introduces 'Vela' Cloud AI Supercomputer Powered by Intel, Nvidia
Survival of the Richest
Anti-Union Capitalism Is Wrecking America
IBM CICS
Early Webservers
If Nothing Changes, Nothing Changes
'Oligarchs run Russia. But guess what? They run the US as well'
Early Webservers
After the storm, hopefully
After the storm, hopefully
Years Before East Palestine Disaster, Congressional Allies of the Rail Industry Intervened to Block Safety Regulations
The Bunker: Tarnished Silver Bullets
IBM 5100
Google Tells Some Employees to Share Desks After Pushing Return-to-Office Plan

AUSMINIUM FOUND IN HEAVY RED-TAPE DISCOVERY

From: Lynn Wheeler <lynn@garlic.com>
Subject: AUSMINIUM FOUND IN HEAVY RED-TAPE DISCOVERY
Date: 01 Jan, 2023
Blog: Facebook
Late 80s, ran across this in austin
AUSMINIUM FOUND IN HEAVY RED-TAPE DISCOVERY

Administratium experts from around the company, while searching piles of red-tape in and around Austin, recently uncovered great quantities of Heavy Red-Tape. While there have been prior findings of Heavy Red-Tape at other red-tape sites, it only occurred in minute quantities. The quantities of Heavy Red-Tape, in and around Austin have allowed Administratium experts to isolate what they believe to be a new element that they are tentatively calling AUSMINIUM.

At this time, plant officials are preparing an official press release declaring that there is no cause for alarm and absolutely NO truth to the rumors that, because of the great concentration of Heavy Red-Tape in the area, there is imminent danger of achieving critical mass and the whole area collapsing into a black hole. Plant officials are stating that there is no evidence that large quantities of Heavy Red-Tape can lead to the spontaneous formation of a black-hole. They point to the lack of any scientific studies unequivocally showing that there are any existing black-holes composed of Heavy Red-Tape. The exact properties of Heavy Red-Tape and ausminium are still under study.


... snip ...

AWD (workstation) supposedly was an IBU (independent business unit) free from standard IBM red-tape ... however, every time we ran into a bureaucrat ... they would (effectively) say that while AWD may be free from "other" IBM red-tape ... AWD wasn't free of *their* red-tape. Reference to Learson trying to counter the bureaucrats and careerists destroying the Watson legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

AWD, 801/risc, iliad, romp, rios, pc/rt, rs6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

past AUSMINIUM posts
https://www.garlic.com/~lynn/2021.html#64 SCIENTIST DISCOVERS NEW ELEMENT - ADMINISTRATIUM
https://www.garlic.com/~lynn/2004b.html#29 The SOB that helped IT jobs move to India is dead!

other recent "wild duck", mostly technology
https://www.linkedin.com/pulse/zvm-50th-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-2-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-4-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-5-lynn-wheeler/
https://www.linkedin.com/pulse/mainframe-channel-io-lynn-wheeler/
https://www.linkedin.com/pulse/inventing-internet-lynn-wheeler/
mostly business
https://www.linkedin.com/pulse/ibm-downfall-lynn-wheeler/
https://www.linkedin.com/pulse/ibm-breakup-lynn-wheeler/
https://www.linkedin.com/pulse/ibm-controlling-market-lynn-wheeler/

--
virtualization experience starting Jan1968, online at home since Mar1970

IMS & DB2

From: Lynn Wheeler <lynn@garlic.com>
Subject: IMS & DB2
Date: 01 Jan, 2023
Blog: Facebook
The original sql/relational implementation was system/r in the 70s (at San Jose Research on VM/145). The IMS group were claiming that System/R required twice the disk space (for the index) and lots more disk I/O (possibly 4-5 times, reading the index). The response was that (especially for any changes) IMS requires significant care&feeding with more people skills and support. This was offset in the 80s for RDBMS by significant reduction in disk $$/mbyte and increase in computer memory sizes able to cache the heaviest used indexes (overall, computer systems were getting much cheaper and IMS's higher people skills/costs made it easier for RDBMS to move into this market). IBM was focused on the IMS follow-on "EAGLE" and we were able to do technology transfer to Endicott ("under the radar") for SQL/DS. Then EAGLE implodes and there was a request for how fast System/R could be ported to MVS, which is eventually released as DB2, originally for "decision support" only.

At the same time starting in the early 80s, reduction in (IBM) 4300 computer costs and increase in price/performance, saw large corporations making vm/4300 orders for hundreds of systems at a time for placing out in departmental areas (sort of the leading edge of the coming distributed computing tsunami, inside IBM, side effect of departmental conference rooms becoming scarce resource as they were converted to 4300 vm/rooms). IBM then expected that 4361/4381 would continue the explosion in 4331/4341 sales, but that market started moving to workstations and large PCs. Early 90s, large corporations could have thousands of RDBMS installs.

Trivia: long ago and far away, my wife was con'ed into going to POK to be responsible for "loosely-coupled" architecture, where she did Peer-Coupled Shared Data architecture. She didn't remain long because of 1) constant battles with the communication group trying to force her to use SNA/VTAM for loosely-coupled operation and 2) little uptake (except for IMS "hot-standby", until much later with SYSPLEX and Parallel SYSPLEX). She has a story about discussions with Vern Watts on who he would ask for permission to do IMS "hot-standby"; Vern said "nobody" ... he would just do it and tell them when it was all done.

Of course later we were doing HA/CMP and were working with the major (non-mainframe) RDBMS vendors (Oracle, Informix, Ingres, Sybase). It had originally started out as HA/6000 for the NYTimes to move their newspaper system (ATEX) off VAX/Cluster to RS/6000 (the RDBMS vendors had VAX/Cluster support in the same source base as their non-mainframe implementation, so I did some tweaks to support VAX/Cluster API semantics). I then rename it HA/CMP (High Availability Cluster Multi-Processing) when I start doing technical cluster scale-up with the national labs and commercial cluster scale-up with the RDBMS vendors. Old archive post with reference to Jan92 cluster scale-up meeting with Oracle CEO (16-processor by mid92, 128-processor by ye92)
https://www.garlic.com/~lynn/95.html#13

and then possibly within hrs of this (archived) FSD HA/CMP email
https://www.garlic.com/~lynn/2014d.html#email920129
and (archived) email about NII meeting at LLNL (national lab)
https://www.garlic.com/~lynn/2006x.html#email920129

Cluster scale-up is transferred, announced as an IBM supercomputer (for technical/scientific *ONLY*), and we were told we couldn't work on anything with more than four processors (we leave IBM a few months later). Contributing was the DB2 group complaining that if we were allowed to proceed, it would be at least five years ahead of them.

system/r posts
https://www.garlic.com/~lynn/submain.html#systemr
Peer-Coupled Shared Data posts
https://www.garlic.com/~lynn/submain.html#shareddata
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
virtualization experience starting Jan1968, online at home since Mar1970

big and little, Can BCD and binary multipliers share circuitry?

From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: big and little, Can BCD and binary multipliers share circuitry?
Newsgroups: comp.arch
Date: Sun, 01 Jan 2023 11:23:46 -1000
jgd@cix.co.uk (John Dallman) writes:
As best I recall, the OS had a trick to make them seem more responsive. Terminals were on serial lines, and the transceiver on the computer end would interrupt the CPU when there were still a few characters left in the output buffer. That let the OS schedule the relevant task and get it to start feeding more characters before the buffer emptied. This avoided pauses, which improved perceived performance.

Univ. had been sold a 360/67 for TSS/360 to replace a 709/1401 combo. I had taken a fortran/computer intro class and at the end of the semester was hired to port 1401 MPIO to the 360/30 (which had replaced the 1401 pending delivery of the 360/67). The datacenter was shutdown for weekends and I got to have the whole place to myself for 48hrs (I could have trouble with monday morning class after 48hrs w/o sleep). After the 360/67 arrived, I was hired fulltime responsible for OS/360 (the 360/67 running as a 360/65 since TSS/360 never really came to production fruition).

Late 60s, the science center installed CP67/CMS at the univ (3rd installation after the science center itself and lincoln labs). I got to rewrite lots of CP67&CMS code. CP67 arrived with automagic terminal type recognition for 2741 & 1052 support ... but the univ had some number of TTY terminals ... so I added ASCII/TTY support (integrated with auto terminal type recognition).

I then wanted to do a single dial-in number for all terminal types
https://en.wikipedia.org/wiki/Line_hunting

however, while the 360 terminal controller supported dynamically changing the terminal-type port scanner, it had hardwired the port line speed. This prompted the univ. to start a clone controller project. We built a 360 channel interface board for an Interdata/3 programmed to emulate a 360 terminal controller, with the addition of being able to dynamically recognize terminal line-speed. This was upgraded to an Interdata/4 for the channel interface and a cluster of Interdata/3s for the port scanners. (Interdata, and later Perkin/Elmer, sold this as a clone controller.)
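The basic idea behind dynamic line-speed recognition can be sketched roughly like this (my own illustration in Python, not the actual Interdata firmware, and the rate list is just an assumption): time the width of an incoming character's start bit and map it to the nearest standard rate.

```python
# Hypothetical sketch of auto line-speed detection: the bit time of a
# serial line is 1/baud seconds, so measuring the start-bit width of the
# first character received lets you pick the closest standard rate.
STANDARD_BAUDS = (110, 134, 150, 300, 600, 1200)  # assumed rate set

def guess_baud(start_bit_us: float) -> int:
    """Return the standard baud rate whose bit time (1e6/baud microseconds)
    is closest to the measured start-bit width."""
    return min(STANDARD_BAUDS, key=lambda b: abs(1_000_000 / b - start_bit_us))

print(guess_baud(9090.9))   # ~9091us bit time -> 110
print(guess_baud(3333.3))   # ~3333us bit time -> 300
```

A hardwired port scanner skips this measurement entirely, which is why the stock controller could not share one dial-in pool across terminals of different speeds.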

360 pcm (clone) controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

More than a decade later I was using 3277 (&3272 controller) terminals which had .086sec hardware response ... when there were various papers about increased productivity with quarter second response. I had enhanced VM/370 to .11sec system response (when comparable VM370 systems & workloads were proudly claiming .25sec system response). I pointed out that .086sec hardware response combined with .25sec system response was .336sec human response (didn't meet the quarter second response).
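The arithmetic behind that complaint is just the sum of the two latencies the user actually sits through:

```python
# Response the human sees = terminal hardware response + system response.
hardware = 0.086   # 3277/3272 hardware response (sec)
system   = 0.25    # claimed "quarter second" system response (sec)
print(f"human response: {hardware + system:.3f}sec")  # human response: 0.336sec
```

With the enhanced .11sec system response the total is .196sec, which is what actually gets under the quarter-second threshold.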

Then the company introduced 3278 (&3274 controller) with lots of electronics moved from the 3278 terminal to the 3274 controller, reducing 3278 manufacturing costs. However it greatly increased coax cable protocol chatter and latency ... driving hardware response to .5sec. Writing complaints to the 3278 product administrator got the response that 3278 wasn't targeted at interactive computing, but at data entry (aka electronic keypunch). It wasn't noticed by the MVS/TSO community since they rarely saw even one second system response.

trivia: late 70s, there was something analogous for disk response. As disks got faster, "I/O redrive" latency increasingly became a larger part (idle from end-of-I/O-operation interrupt to starting the next I/O). The 3880 disk controller was replacing the 3830 disk controller. While 3880 increased data transfer support to 3mbyte/sec, the controller CPU was significantly slower (than 3830) ... the controller I/O command processing startup and ending took much longer (older 3330 disk with a 3830 had higher throughput than with a 3880 controller). They tried a gimmick where they would reflect the end-of-I/O interrupt when data transfer ended but before command processing was actually finished (assuming that the controller could finish command termination processing overlapped with system interrupt and device driver redrive processing).
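A toy model (mine, with made-up numbers) of why redrive latency mattered more as disks got faster: a fixed host-side redrive delay becomes a bigger fraction of each ever-shorter I/O operation.

```python
# Illustrative model: each I/O is service time plus idle "redrive" latency
# (from end-of-I/O interrupt to starting the next queued request).
def effective_iops(service_ms: float, redrive_ms: float) -> float:
    """I/O operations per second for one device when every operation is
    followed by redrive latency before the next request is started."""
    return 1000.0 / (service_ms + redrive_ms)

# Hypothetical 2ms redrive latency: a slow 25ms operation barely notices
# it, but a fast 5ms operation loses almost a third of its throughput.
for service in (25.0, 10.0, 5.0):
    ideal = effective_iops(service, 0.0)
    actual = effective_iops(service, 2.0)
    print(f"{service:4.0f}ms op: {actual:5.1f} IOPS vs ideal {ideal:5.1f} "
          f"({100 * (1 - actual / ideal):.0f}% lost to redrive)")
```

The 3880's early-interrupt gimmick tried to hide controller-side overhead inside this redrive window, which is exactly where it got into trouble when the window wasn't long enough.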

I had been con'ed into spending part of my time playing disk engineer and had to show them all the problems that gimmick caused (in part because I had rewritten the OS I/O subsystem for them to be bullet proof and never fail so they could do ondemand, concurrent prototype testing, instead of prescheduled stand-alone testing; they then got in the habit of blaming me for prototype testing problems ... so I had to spend an increasing amount of time diagnosing their hardware issues).

playing disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk

some 3272/3277 comparison with 3274/3278 posts
https://www.garlic.com/~lynn/2021j.html#74 IBM 3278
https://www.garlic.com/~lynn/2021i.html#69 IBM MYTE
https://www.garlic.com/~lynn/2019c.html#4 3270 48th Birthday
https://www.garlic.com/~lynn/2016f.html#1 Frieden calculator
https://www.garlic.com/~lynn/2016e.html#51 How the internet was invented
https://www.garlic.com/~lynn/2016d.html#104 Is it a lost cause?
https://www.garlic.com/~lynn/2016d.html#42 Old Computing
https://www.garlic.com/~lynn/2016c.html#8 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2016.html#15 Dilbert ... oh, you must work for IBM
https://www.garlic.com/~lynn/2015g.html#58 [Poll] Computing favorities
https://www.garlic.com/~lynn/2014m.html#127 How Much Bandwidth do we have?
https://www.garlic.com/~lynn/2014h.html#106 TSO Test does not support 65-bit debugging?
https://www.garlic.com/~lynn/2014g.html#26 Fifty Years of BASIC, the Programming Language That Made Computers Personal
https://www.garlic.com/~lynn/2014g.html#23 Three Reasons the Mainframe is in Trouble
https://www.garlic.com/~lynn/2014f.html#41 System Response
https://www.garlic.com/~lynn/2013g.html#14 Tech Time Warp of the Week: The 50-Pound Portable PC, 1977
https://www.garlic.com/~lynn/2012p.html#1 3270 response & channel throughput
https://www.garlic.com/~lynn/2012n.html#37 PDP-10 and Vax, was System/360--50 years--the future?
https://www.garlic.com/~lynn/2012m.html#37 Why File transfer through TSO IND$FILE is slower than TCP/IP FTP ?
https://www.garlic.com/~lynn/2012m.html#15 cp67, vm370, etc
https://www.garlic.com/~lynn/2012.html#13 From Who originated the phrase "user-friendly"?
https://www.garlic.com/~lynn/2011p.html#61 Migration off mainframe
https://www.garlic.com/~lynn/2011g.html#43 My first mainframe experience
https://www.garlic.com/~lynn/2010b.html#31 Happy DEC-10 Day
https://www.garlic.com/~lynn/2009q.html#72 Now is time for banks to replace core system according to Accenture
https://www.garlic.com/~lynn/2009q.html#53 The 50th Anniversary of the Legendary IBM 1401
https://www.garlic.com/~lynn/2009e.html#19 Architectural Diversity
https://www.garlic.com/~lynn/2006s.html#42 Ranking of non-IBM mainframe builders?
https://www.garlic.com/~lynn/2005r.html#15 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005r.html#12 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2001m.html#19 3270 protocol

--
virtualization experience starting Jan1968, online at home since Mar1970

AUSMINIUM FOUND IN HEAVY RED-TAPE DISCOVERY

From: Lynn Wheeler <lynn@garlic.com>
Subject: AUSMINIUM FOUND IN HEAVY RED-TAPE DISCOVERY
Date: 01 Jan, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#0 AUSMINIUM FOUND IN HEAVY RED-TAPE DISCOVERY

Lou left IBM to head a large private-equity company buying up beltway bandits and gov. contractors and hiring prominent politicians to lobby congress to outsource gov. to their companies (a work-around to laws against using gov. funding/payments to directly lobby congress)

Gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity

some mention of "barbarians" (aka private equity) at the capital
https://www.garlic.com/~lynn/2022c.html#40 After IBM
https://www.garlic.com/~lynn/2021f.html#7 The Rise of Private Equity
https://www.garlic.com/~lynn/2021f.html#6 Financial Engineering
https://www.garlic.com/~lynn/2021f.html#4 Study: Are You Too Nice to be Financially Successful?
https://www.garlic.com/~lynn/2021.html#51 Sacking the Capital and Honor
https://www.garlic.com/~lynn/2021.html#47 Barbarians Sacked The Capital
https://www.garlic.com/~lynn/2021.html#46 Barbarians Sacked The Capital
https://www.garlic.com/~lynn/2017i.html#51 Russian Hackers Stole NSA Data on U.S. Cyber Defense
https://www.garlic.com/~lynn/2017i.html#10 The General Who Lost 2 Wars, Leaked Classified Information to His Lover--and Retired With a $220,000 Pension

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Channel Redrive

From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe Channel Redrive
Date: 02 Jan, 2023
Blog: Linkedin
Mainframe Channel Redrive
https://www.linkedin.com/pulse/mainframe-channel-redrive-lynn-wheeler/

some follow-on to my 1jan2023 11:23 comment to (usenet) comp.arch post
https://groups.google.com/g/comp.arch/c/6dFOsZODXTw
also archived
https://www.garlic.com/~lynn/2023.html#2 big and little, Can BCD and binary multipliers share circuitry?
other old archived post on disk performance & system throughput
https://www.garlic.com/~lynn/2008c.html#88 CPU time differences for the same job

As disks got faster, (operating system) "I/O redrive" latency increasingly became a larger part (idle from end-of-I/O-operation interrupt to starting the next queued I/O). The 3880 disk controller was replacing the 3830 disk controller. While 3880 increased data transfer support to 3mbyte/sec, the controller CPU was significantly slower (than 3830) ... the controller I/O command processing startup and ending took much longer (older 3330 disk with a 3830 had higher throughput than with a 3880 controller). They tried a gimmick where they would reflect the end-of-I/O interrupt when data transfer ended but before command processing was actually finished (assuming that the controller could finish command termination processing overlapped with system interrupt and device driver redrive handling).

I've mentioned several times that in the morph from CP67->VM370, they dropped and/or simplified a lot of features (including my dynamic adaptive resource management from undergraduate days and SMP, tightly-coupled multiprocessor support). One of my hobbies after joining IBM was enhanced production operating systems for internal datacenters and I spent much of 1974 and 1975 adding CP67 stuff back into VM370 (for "CSC/VM"). One of the things I didn't get around to adding back in was CP67 "CHFREE", which was located in the I/O interrupt handling path ... as soon as the necessary interrupt handling stuff was finished (like checking for a unit check error), it would invoke I/O redrive (a couple hundred instructions to redrive instead of a couple thousand). For the disk engineering input/output supervisor rewrite (making it bullet proof and never fail), so they could do development/prototype ondemand, concurrent testing (instead of scheduled, stand-alone testing), I put CHFREE back in (not so much for disk engineering, but for production operating systems; I had transferred to SJR and CSC/VM became SJR/VM for internal distribution).
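A minimal sketch (my own, in Python rather than 360 assembler) of the CHFREE idea: the interrupt handler itself redrives the next queued request on the short path, instead of returning through the much longer dispatcher path first.

```python
# Illustrative device queue where the end-of-I/O interrupt handler
# immediately starts ("redrives") the next queued request.
from collections import deque

class Device:
    def __init__(self):
        self.queue = deque()   # requests waiting for the channel
        self.busy = False      # an I/O is in flight

    def start_io(self, request):
        if self.busy:
            self.queue.append(request)   # device busy: queue for redrive
        else:
            self.busy = True             # channel free: start immediately
            print(f"start {request}")

    def io_interrupt(self):
        """End-of-I/O interrupt: after minimal status checking, redrive the
        next queued request right here (the couple-hundred-instruction path),
        rather than after a full trip through the dispatcher."""
        self.busy = False
        if self.queue:
            self.start_io(self.queue.popleft())

dev = Device()
dev.start_io("A"); dev.start_io("B")   # A starts, B queues
dev.io_interrupt()                     # A completes; B is redriven immediately
```

The throughput win is entirely in how soon `io_interrupt` gets the next request onto the device; every extra instruction before that point is idle disk time.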

I've claimed that major motivation for adding SSCH in 370/XA ... was the enormous MVS redrive latency ... and they could put pending queued requests (for asynchronous redrive) down into the "hardware".

past posts mentioning CHFREE
https://www.garlic.com/~lynn/2020.html#42 If Memory Had Been Cheaper
https://www.garlic.com/~lynn/2001k.html#3 Minimalist design (was Re: Parity - why even or odd)

playing disk engineer in bldg14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk

old archived email:
https://www.garlic.com/~lynn/2006v.html#email731212
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm

I've told the story that trout/3090 had initially configured the number of channels to achieve target aggregate system throughput (based on the assumption that 3830->3880 just improved transfer rate to 3mbyte/sec). While the 3880 tweaks somewhat masked the significantly slower 3880 processor ... the total channel busy was significantly worse than their assumption about 3880 being a 3830 with support for (3380) 3mbyte/sec transfer rate. When they realized how bad the 3880 channel busy was going to be, they realized they would have to significantly increase the number of channels (to achieve desired system throughput). The increase in the number of channels then required an additional TCM; there was a joke that 3090 was going to bill the 3880 group for the increase in 3090 manufacturing cost. IBM marketing eventually respun the increase in the number of 3090 channels as making it a wonderful I/O machine (rather than a counter to the significant increase in 3880 controller channel busy overhead).

some recent posts mentioning 3880 (overhead) and no. of 3090 channels
https://www.garlic.com/~lynn/2022h.html#114 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022g.html#4 3880 DASD Controller
https://www.garlic.com/~lynn/2022e.html#49 Channel Program I/O Processing Efficiency

Something analogous also shows up later in the (IBM mainframe) z196 "peak I/O" benchmark requiring 104 "FICON" (running over 104 FCS) to get 2M IOPS (where a single native FCS claimed over a million IOPS)
https://www.linkedin.com/pulse/mainframe-channel-io-lynn-wheeler/
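The benchmark arithmetic above works out as follows (taking "over a million IOPS" as one million for the comparison):

```python
# z196 "peak I/O" benchmark: 2M IOPS spread over 104 FICON channels,
# compared against the claimed per-channel rate of a native FCS.
total_iops = 2_000_000
ficon_channels = 104
native_fcs_iops = 1_000_000     # "over a million IOPS" per native FCS

per_ficon = total_iops / ficon_channels
print(f"{per_ficon:,.0f} IOPS per FICON")                       # 19,231 IOPS per FICON
print(f"native FCS ~{native_fcs_iops / per_ficon:.0f}x higher")  # native FCS ~52x higher
```

That roughly 50x per-channel gap is the same flavor of protocol/overhead penalty as the 3880 channel-busy story.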

ficon posts
https://www.garlic.com/~lynn/submisc.html#ficon

This is also analogous to the early 70s decision to make all 370s virtual memory. Basically MVT storage management was so bad that region execution sizes had to be specified four times larger than actually used. As a result a typical 1mbyte 370/165 would only run four concurrently executing regions ... insufficient to keep the machine busy and justify its cost. Remapping MVT into a 16mbyte virtual memory would allow increasing the number of concurrently executing regions by a factor of four with little or no paging.
https://www.garlic.com/~lynn/2011d.html#73
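The worked numbers behind that decision (my restatement of the figures in the paragraph above):

```python
# MVT regions had to be specified ~4x larger than actually used, so a
# 1MB 370/165 fit only 4 concurrent regions. A 16MB virtual address
# space gives 16x the addressing, of which 4x is eaten by the
# over-specification, leaving 4x more concurrent regions.
real_storage_mb = 1          # typical 370/165 real storage
overspec_factor = 4          # regions specified 4x larger than used
regions_real = 4             # concurrent regions on real storage alone
virtual_mb = 16              # 16MB virtual memory

regions_virtual = regions_real * (virtual_mb // real_storage_mb) // overspec_factor
print(regions_virtual)       # 16 -- four times as many concurrent regions
```

Since the over-specified space was mostly never touched, those 16 regions still fit their working sets in 1MB of real storage with little or no paging.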

FICON
https://en.wikipedia.org/wiki/FICON
Fibre Channel
https://en.wikipedia.org/wiki/Fibre_Channel

other Fibre Channel:

Fibre Channel Protocol
https://en.wikipedia.org/wiki/Fibre_Channel_Protocol
Fibre Channel switch
https://en.wikipedia.org/wiki/Fibre_Channel_switch
Fibre Channel electrical interface
https://en.wikipedia.org/wiki/Fibre_Channel_electrical_interface
Fibre Channel over Ethernet
https://en.wikipedia.org/wiki/Fibre_Channel_over_Ethernet

posts mentioning both csc/vm and sjr/vm
https://www.garlic.com/~lynn/2022.html#128 SHARE LSRAD Report
https://www.garlic.com/~lynn/2015h.html#106 DOS descendant still lives was Re: slight reprieve on the z
https://www.garlic.com/~lynn/2015d.html#3 30 yr old email
https://www.garlic.com/~lynn/2015d.html#2 Knowledge Center Outage May 3rd
https://www.garlic.com/~lynn/2015c.html#27 30 yr old email
https://www.garlic.com/~lynn/2014g.html#85 Costs of core
https://www.garlic.com/~lynn/2014f.html#89 Real Programmers
https://www.garlic.com/~lynn/2013e.html#88 Sequence Numbrs (was 32760?
https://www.garlic.com/~lynn/2012.html#14 HONE
https://www.garlic.com/~lynn/2011m.html#41 CMS load module format
https://www.garlic.com/~lynn/2010n.html#62 When will MVS be able to use cheap dasd
https://www.garlic.com/~lynn/2010l.html#20 Old EMAIL Index
https://www.garlic.com/~lynn/2010f.html#24 Would you fight?
https://www.garlic.com/~lynn/2010d.html#70 LPARs: More or Less?
https://www.garlic.com/~lynn/2009i.html#35 SEs & History Lessons
https://www.garlic.com/~lynn/2007b.html#51 Special characters in passwords was Re: RACF - Password rules

past posts mentioning 3880 and I/O redrive
https://www.garlic.com/~lynn/2023.html#2 big and little, Can BCD and binary multipliers share circuitry?
https://www.garlic.com/~lynn/2022g.html#4 3880 DASD Controller
https://www.garlic.com/~lynn/2022f.html#95 VM I/O
https://www.garlic.com/~lynn/2022e.html#54 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2021.html#6 3880 & 3380
https://www.garlic.com/~lynn/2020.html#42 If Memory Had Been Cheaper
https://www.garlic.com/~lynn/2017g.html#64 What is the most epic computer glitch you have ever seen?
https://www.garlic.com/~lynn/2016h.html#50 Resurrected! Paul Allen's tech team brings 50-year -old supercomputer back from the dead
https://www.garlic.com/~lynn/2016e.html#56 IBM 1401 vs. 360/30 emulation?
https://www.garlic.com/~lynn/2016b.html#79 Asynchronous Interrupts
https://www.garlic.com/~lynn/2015f.html#88 Formal definition of Speed Matching Buffer
https://www.garlic.com/~lynn/2014k.html#22 1950: Northrop's Digital Differential Analyzer
https://www.garlic.com/~lynn/2013n.html#69 'Free Unix!': The world-changing proclamation made30yearsagotoday
https://www.garlic.com/~lynn/2013n.html#56 rebuild 1403 printer chain
https://www.garlic.com/~lynn/2012p.html#17 What is a Mainframe?
https://www.garlic.com/~lynn/2012o.html#28 IBM mainframe evolves to serve the digital world
https://www.garlic.com/~lynn/2012m.html#6 Blades versus z was Re: Turn Off Another Light - Univ. of Tennessee
https://www.garlic.com/~lynn/2012c.html#23 M68k add to memory is not a mistake any more
https://www.garlic.com/~lynn/2012c.html#20 M68k add to memory is not a mistake any more
https://www.garlic.com/~lynn/2012b.html#2 The PC industry is heading for collapse
https://www.garlic.com/~lynn/2011p.html#120 Start Interpretive Execution
https://www.garlic.com/~lynn/2011k.html#86 'smttter IBMdroids
https://www.garlic.com/~lynn/2011.html#36 CKD DASD
https://www.garlic.com/~lynn/2010e.html#30 SHAREWARE at Its Finest
https://www.garlic.com/~lynn/2009r.html#52 360 programs on a z/10
https://www.garlic.com/~lynn/2009q.html#74 Now is time for banks to replace core system according to Accenture
https://www.garlic.com/~lynn/2008d.html#52 Throwaway cores
https://www.garlic.com/~lynn/2007t.html#77 T3 Sues IBM To Break its Mainframe Monopoly
https://www.garlic.com/~lynn/2007h.html#9 21st Century ISA goals?
https://www.garlic.com/~lynn/2007h.html#6 21st Century ISA goals?
https://www.garlic.com/~lynn/2006g.html#0 IBM 3380 and 3880 maintenance docs needed
https://www.garlic.com/~lynn/2004p.html#61 IBM 3614 and 3624 ATM's
https://www.garlic.com/~lynn/2003m.html#43 S/360 undocumented instructions?
https://www.garlic.com/~lynn/2003f.html#40 inter-block gaps on DASD tracks
https://www.garlic.com/~lynn/2002b.html#2 Microcode? (& index searching)
https://www.garlic.com/~lynn/2001h.html#28 checking some myths.
https://www.garlic.com/~lynn/2001b.html#69 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2000c.html#75 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/96.html#19 IBM 4381 (finger-check)

--
virtualization experience starting Jan1968, online at home since Mar1970

1403 printer

From: Lynn Wheeler <lynn@garlic.com>
Subject: 1403 printer
Date: 02 Jan, 2023
Blog: Facebook
Took a two credit hr intro to fortran/computers class and at the end of the semester got a student job to reimplement 1401 MPIO on a 360/30. The univ. had been sold a 360/67 for TSS/360 replacing a 709/1401 (709 tape->tape, 1401 unit record front end) ... and the 1401 was temporarily replaced with a 360/30 (pending availability of the 360/67). The 360/30 had 1401 emulation so could continue to run 1401 MPIO ... but I guess they wanted to get 360 experience. The univ. shutdown the datacenter over the weekend and they let me have the whole place to myself (for weekends, although 48hrs w/o sleep could make monday morning classes difficult). I was given a bunch of hardware and software manuals and got to design my own monitor, interrupt handlers, device drivers (console, tape, 2540 reader/punch, 1403), error recovery, storage management, etc. Within a few weeks I had a 2000 card 360 assembler program. Univ. had both 1403 and 1403N1.
https://en.wikipedia.org/wiki/IBM_1403

I quickly learned that the 1st thing coming in was to clean the tape drives and 1403s, take the 2540 apart and clean it, etc. Also, sometimes Fri 3rd shift finished early and had powered everything off. Sometimes when powering on the 360/30 it wouldn't come up, and I had to figure out to put all controllers in CE mode, power on the 360/30, and individually power on all the controllers (and then take them out of CE mode). Within a year of taking the intro class, I was hired fulltime responsible for OS/360 (after the 360/67 came in; TSS/360 never came to production fruition so it ran as a 360/65 with OS/360). Then before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit to better monetize the investment, including offering services to non-Boeing entities).

After graduating, I left Boeing and joined the IBM Cambridge Science Center. CSC had modified a 360/40 with virtual memory hardware and developed CP40/CMS, which morphed into CP67/CMS when the 360/67 became available (and later into VM370/CMS).

Some of the MIT CTSS/7094 people had gone to the 5th flr for MULTICS. Others had gone to the science center on the 4th flr. Some amount of CMS carried over from CTSS. CTSS runoff
https://en.wikipedia.org/wiki/TYPSET_and_RUNOFF
was redone for CMS as "SCRIPT" ... GML was also invented at the science center in 1969 and GML tag processing was added to CMS SCRIPT.
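As an aside (not from the original post): the GML "starter set" tags that SCRIPT came to process were colon-prefixed markup describing document structure rather than formatting; a rough illustrative fragment, with hypothetical content, might look like:

```
:h1.370 Architecture
:p.This paragraph is formatted by SCRIPT according to the current
style definitions, rather than by explicit formatting controls.
:ul.
:li.full architecture manual
:li.Principles of Operation subset
:eul.
```

The separation of structural tags from formatting is what later carried forward into SGML (and eventually HTML/XML).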

One of the first mainstream IBM manuals done in CMS SCRIPT was the 370 Architecture manual (called the "red book" for its distribution in a red 3-ring binder). A CMS SCRIPT command option would print either the full architecture manual or the 370 Principles of Operation subset. A 1403/3211-printed POO can (usually) be recognized by its non-proportional font; later versions might be printed on a 3800 (or other laser printer) with a proportional font. However, I was a little surprised to see the Mar1983 Principles Of Operation (with non-proportional font).
https://bitsavers.org/pdf/ibm/370/princOps/SA22-7085-0_370-XA_Principles_of_Operation_Mar83.pdf

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML/SGML posts
https://www.garlic.com/~lynn/submain.html#sgml

some recent posts mentioning 1401 MPIO
https://www.garlic.com/~lynn/2023.html#2 big and little, Can BCD and binary multipliers share circuitry?
https://www.garlic.com/~lynn/2022h.html#99 IBM 360
https://www.garlic.com/~lynn/2022h.html#60 Fortran
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2022h.html#30 Byte
https://www.garlic.com/~lynn/2022g.html#95 Iconic consoles of the IBM System/360 mainframes, 55 years old
https://www.garlic.com/~lynn/2022g.html#11 360 Powerup
https://www.garlic.com/~lynn/2022f.html#8 CICS 53 Years
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#45 IBM Chairman John Opel
https://www.garlic.com/~lynn/2022e.html#31 Technology Flashback
https://www.garlic.com/~lynn/2022e.html#0 IBM Quota
https://www.garlic.com/~lynn/2022d.html#110 Window Display Ground Floor Datacenters
https://www.garlic.com/~lynn/2022d.html#95 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#87 Punch Cards
https://www.garlic.com/~lynn/2022d.html#78 US Takes Supercomputer Top Spot With First True Exascale Machine
https://www.garlic.com/~lynn/2022d.html#69 Mainframe History: How Mainframe Computers Evolved Over the Years
https://www.garlic.com/~lynn/2022d.html#57 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022d.html#45 MGLRU Revved Once More For Promising Linux Performance Improvements
https://www.garlic.com/~lynn/2022d.html#19 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022d.html#10 Computer Server Market
https://www.garlic.com/~lynn/2022d.html#8 Computer Server Market
https://www.garlic.com/~lynn/2022b.html#35 Dataprocessing Career
https://www.garlic.com/~lynn/2022.html#126 On the origin of the /text section/ for code
https://www.garlic.com/~lynn/2022.html#72 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#48 Mainframe Career
https://www.garlic.com/~lynn/2022.html#35 Error Handling
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2022.html#1 LLMPS, MPIO, DEBE
https://www.garlic.com/~lynn/2021k.html#1 PCP, MFT, MVT OS/360, VS1, & VS2
https://www.garlic.com/~lynn/2021j.html#64 addressing and protection, was Paper about ISO C
https://www.garlic.com/~lynn/2021j.html#63 IBM 360s
https://www.garlic.com/~lynn/2021i.html#61 Virtual Machine Debugging
https://www.garlic.com/~lynn/2021f.html#79 Where Would We Be Without the Paper Punch Card?
https://www.garlic.com/~lynn/2021f.html#20 1401 MPIO
https://www.garlic.com/~lynn/2021f.html#19 1401 MPIO
https://www.garlic.com/~lynn/2021e.html#47 Recode 1401 MPIO for 360/30
https://www.garlic.com/~lynn/2021e.html#44 Blank 80-column punch cards up for grabs
https://www.garlic.com/~lynn/2021e.html#43 Blank 80-column punch cards up for grabs
https://www.garlic.com/~lynn/2021e.html#38 Blank 80-column punch cards up for grabs
https://www.garlic.com/~lynn/2021e.html#27 Learning EBCDIC
https://www.garlic.com/~lynn/2021d.html#25 Field Support and PSRs
https://www.garlic.com/~lynn/2021c.html#40 Teaching IBM class
https://www.garlic.com/~lynn/2021b.html#63 Early Computer Use
https://www.garlic.com/~lynn/2021b.html#27 DEBE?
https://www.garlic.com/~lynn/2021.html#61 Mainframe IPL
https://www.garlic.com/~lynn/2021.html#41 CADAM & Catia
https://www.garlic.com/~lynn/2020.html#32 IBM TSS

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Channel Redrive

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe Channel Redrive
Date: 03 Jan, 2023
Blog: Linkedin
re:
https://www.garlic.com/~lynn/2023.html#4 Mainframe Channel Redrive

I've commented that original 360 CKD ... and "multi-track search" ... traded off abundant I/O capacity for relatively constrained processor and real memory ... however, by the mid-70s that trade-off was beginning to invert. In the early 80s, I had written a tome that disk "relative system throughput" had declined by an order of magnitude (a factor of ten) since the 60s ... aka disk throughput had increased 3-5 times while processor speed and real storage size had increased 40-50 times. GPD (the disk division) took exception and assigned the division performance group to refute my claim. After a couple of weeks, they basically said that I had slightly understated the problem. They then respun the analysis for a (mainframe user group) SHARE presentation on configuring disks for improved system throughput (SHARE 63, B874, 16Aug1984).
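The order-of-magnitude claim is simple arithmetic on the growth factors quoted above (the 3-5x and 40-50x figures are from the post; the helper function below is just an illustrative back-of-envelope sketch):

```python
def relative_disk_throughput(disk_gain, system_gain):
    """Disk throughput growth expressed relative to overall system growth.

    A result of 1.0 means disks kept pace with the rest of the system;
    values well below 1.0 mean disks became a relative bottleneck.
    """
    return disk_gain / system_gain

# mid-range figures: disk up ~4x, processor speed / real storage up ~45x
ratio = relative_disk_throughput(4, 45)
print(f"relative disk throughput: {ratio:.2f}x")  # roughly 0.09 -- about a tenfold decline
```

Even at the extremes of the quoted ranges (5/40 vs 3/50), the result stays between roughly 0.06x and 0.125x, consistent with the "order of magnitude" characterization.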

playing disk engineer in bldg14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk

misc. past archived posts referencing B874 presentation
https://www.garlic.com/~lynn/2022g.html#88 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#84 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022f.html#0 Mainframe Channel I/O
https://www.garlic.com/~lynn/2022e.html#49 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022d.html#48 360&370 I/O Channels
https://www.garlic.com/~lynn/2022.html#92 Processor, DASD, VTAM & TCP/IP performance
https://www.garlic.com/~lynn/2022.html#70 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021k.html#131 Multitrack Search Performance
https://www.garlic.com/~lynn/2021k.html#108 IBM Disks
https://www.garlic.com/~lynn/2021j.html#78 IBM 370 and Future System
https://www.garlic.com/~lynn/2021g.html#44 iBM System/3 FORTRAN for engineering/science work?
https://www.garlic.com/~lynn/2021f.html#53 3380 disk capacity
https://www.garlic.com/~lynn/2021.html#79 IBM Disk Division
https://www.garlic.com/~lynn/2021.html#59 San Jose bldg 50 and 3380 manufacturing
https://www.garlic.com/~lynn/2019b.html#94 MVS Boney Fingers
https://www.garlic.com/~lynn/2019.html#78 370 virtual memory
https://www.garlic.com/~lynn/2018e.html#93 It's 1983: What computer would you buy?
https://www.garlic.com/~lynn/2018c.html#30 Bottlenecks and Capacity planning
https://www.garlic.com/~lynn/2017j.html#96 thrashing, was Re: A Computer That Never Was: the IBM 7095
https://www.garlic.com/~lynn/2017f.html#28 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/2017e.html#5 TSS/8, was A Whirlwind History of the Computer
https://www.garlic.com/~lynn/2017d.html#61 Paging subsystems in the era of bigass memory
https://www.garlic.com/~lynn/2017b.html#32 Virtualization's Past Helps Explain Its Current Importance
https://www.garlic.com/~lynn/2016g.html#40 Floating point registers or general purpose registers
https://www.garlic.com/~lynn/2016e.html#38 How the internet was invented
https://www.garlic.com/~lynn/2016d.html#68 Raspberry Pi 3?
https://www.garlic.com/~lynn/2016d.html#21 What was a 3314?
https://www.garlic.com/~lynn/2016c.html#12 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2014m.html#87 Death of spinning disk?
https://www.garlic.com/~lynn/2014l.html#90 What's the difference between doing performance in a mainframe environment versus doing in others
https://www.garlic.com/~lynn/2014b.html#49 Mac at 30: A love/hate relationship from the support front
https://www.garlic.com/~lynn/2013m.html#72 'Free Unix!': The world-changing proclamation made 30 years agotoday
https://www.garlic.com/~lynn/2011p.html#32 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2011p.html#5 Why are organizations sticking with mainframes?
https://www.garlic.com/~lynn/2011g.html#59 Is the magic and romance killed by Windows (and Linux)?
https://www.garlic.com/~lynn/2010l.html#32 OS idling
https://www.garlic.com/~lynn/2010l.html#31 Wax ON Wax OFF -- Tuning VSAM considerations
https://www.garlic.com/~lynn/2010c.html#1 "The Naked Mainframe" (Forbes Security Article)
https://www.garlic.com/~lynn/2009l.html#67 ACP, One of the Oldest Open Source Apps
https://www.garlic.com/~lynn/2009k.html#52 Hercules; more information requested
https://www.garlic.com/~lynn/2009k.html#34 A Complete History Of Mainframe Computing
https://www.garlic.com/~lynn/2007s.html#9 Poster of computer hardware events?
https://www.garlic.com/~lynn/2007s.html#5 Poster of computer hardware events?
https://www.garlic.com/~lynn/2006o.html#68 DASD Response Time (on antique 3390?)
https://www.garlic.com/~lynn/2006f.html#3 using 3390 mod-9s
https://www.garlic.com/~lynn/2002i.html#46 AS/400 and MVS - clarification please
https://www.garlic.com/~lynn/2002i.html#18 AS/400 and MVS - clarification please

--
virtualization experience starting Jan1968, online at home since Mar1970

Leopards Eat Kevin McCarthy's Face

From: Lynn Wheeler <lynn@garlic.com>
Subject: Leopards Eat Kevin McCarthy's Face
Date: 04 Jan, 2023
Blog: Facebook
Leopards Eat Kevin McCarthy's Face
https://www.nytimes.com/2023/01/04/opinion/kevin-mccarthy-speaker-race.html
Kevin McCarthy nurtured the spirit of reactionary nihilism in the Republican Party, first by trying to harness the energy of the Tea Party for his own ambition, and then by his near-total capitulation to Donald Trump. Now the chaotic forces he abetted have, at least for the moment, derailed his goal of becoming House speaker, subjecting him to multiple public humiliations at what was supposed to be his moment of triumph.

How Far Right Are the 20 Republicans Who Voted Against McCarthy?
https://www.nytimes.com/interactive/2023/01/04/us/politics/house-speaker-republicans-vote-against-mccarthy.html
Live Vote Count: Tracking the House Speaker Votes
https://www.nytimes.com/interactive/2023/01/04/us/politics/house-speaker-vote-tally.html
Who Are the Republicans Opposing McCarthy's Speaker Bid
https://www.nytimes.com/2023/01/03/us/politics/kevin-mccarthy-republican-opposition.html

fiscal responsibility act posts
https://www.garlic.com/~lynn/submisc.html#fiscal.responsibility.act
tax fraud, tax evasion, tax loopholes, tax avoidance, tax havens posts
https://www.garlic.com/~lynn/submisc.html#tax.evasion
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

past archived posts mentioning "tea party"
https://www.garlic.com/~lynn/2022h.html#79 The GOP wants to cut funding to the IRS. We can't let that happen
https://www.garlic.com/~lynn/2022g.html#1 A Second Constitutional Convention?
https://www.garlic.com/~lynn/2022f.html#52 Background on some recent major budget items
https://www.garlic.com/~lynn/2021j.html#61 Tax Evasion and the Republican Party
https://www.garlic.com/~lynn/2021i.html#22 The top 1 percent are evading $163 billion a year in taxes, the Treasury finds
https://www.garlic.com/~lynn/2021i.html#13 Companies Lobbying Against Infrastructure Tax Increases Have Avoided Paying Billions in Taxes
https://www.garlic.com/~lynn/2021g.html#54 Republicans Have Taken a Brave Stand in Defense of Tax Cheats
https://www.garlic.com/~lynn/2021f.html#49 The Secret IRS Files: Trove of Never-Before-Seen Records Reveal How the Wealthiest Avoid Income Tax
https://www.garlic.com/~lynn/2021f.html#39 'Bipartisanship' Is Dead in Washington
https://www.garlic.com/~lynn/2021e.html#93 Treasury calls for doubling IRS staff to target tax evasion, crypto transfers
https://www.garlic.com/~lynn/2021e.html#29 US tax plan proposes massive overhaul to audit high earners and corporations for tax evasion
https://www.garlic.com/~lynn/2021e.html#1 Rich Americans Who Were Warned on Taxes Hunt for Ways Around Them
https://www.garlic.com/~lynn/2019e.html#152 US lost more tax revenue than any other developed country in 2018 due to Trump tax cuts
https://www.garlic.com/~lynn/2019d.html#99 Trump claims he's the messiah. Maybe he should quit while he's ahead
https://www.garlic.com/~lynn/2018f.html#22 A Tea Party Movement to Overhaul the Constitution Is Quietly Gaining
https://www.garlic.com/~lynn/2018f.html#20 A Tea Party Movement to Overhaul the Constitution Is Quietly Gaining
https://www.garlic.com/~lynn/2018f.html#19 A Tea Party Movement to Overhaul the Constitution Is Quietly Gaining
https://www.garlic.com/~lynn/2018f.html#15 A Tea Party Movement to Overhaul the Constitution Is Quietly Gaining
https://www.garlic.com/~lynn/2018f.html#11 A Tea Party Movement to Overhaul the Constitution Is Quietly Gaining
https://www.garlic.com/~lynn/2018f.html#9 A Tea Party Movement to Overhaul the Constitution Is Quietly Gaining
https://www.garlic.com/~lynn/2018e.html#14 On The Deficit, GOP Has Been Playing Us All For Suckers
https://www.garlic.com/~lynn/2018d.html#88 The Pentagon Can't Account for $21 Trillion (That's Not a Typo)
https://www.garlic.com/~lynn/2017d.html#2 Single Payer
https://www.garlic.com/~lynn/2016f.html#95 Chain of Title: How Three Ordinary Americans Uncovered Wall Street's Great Foreclosure Fraud
https://www.garlic.com/~lynn/2016e.html#105 Washington Corruption
https://www.garlic.com/~lynn/2016c.html#14 Qbasic
https://www.garlic.com/~lynn/2016b.html#102 Qbasic
https://www.garlic.com/~lynn/2016b.html#65 Qbasic
https://www.garlic.com/~lynn/2014d.html#64 Wells Fargo made up on-demand foreclosure papers plan: court filing charges
https://www.garlic.com/~lynn/2014.html#69 Pensions, was Re: Royal Pardon For Turing
https://www.garlic.com/~lynn/2014.html#60 Pensions, was Re: Royal Pardon For Turing
https://www.garlic.com/~lynn/2013k.html#52 The agency problem and how to create a criminogenic environment
https://www.garlic.com/~lynn/2013k.html#47 The Incredible Con the Banksters Pulled on the FBI
https://www.garlic.com/~lynn/2013k.html#45 What Makes a Tax System Bizarre?
https://www.garlic.com/~lynn/2013k.html#11 What Makes Infrastructure investment not bizarre
https://www.garlic.com/~lynn/2013b.html#63 NBC's website hacked with malware
https://www.garlic.com/~lynn/2011o.html#80 How Pursuit of Profits Kills Innovation and the U.S. Economy

--
virtualization experience starting Jan1968, online at home since Mar1970

Ponzi Hospitals and Counterfeit Capitalism

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Ponzi Hospitals and Counterfeit Capitalism
Date: 05 Jan, 2023
Blog: Facebook
Ponzi Hospitals and Counterfeit Capitalism. The end of cheap money in our monopoly-heavy economy is going to make things very weird. Big private equity shops could be in trouble.
https://mattstoller.substack.com/p/ponzi-hospitals-and-counterfeit-capitalism

... from nakedcapitalism.com: If private equity is pulling as much cash as it can out of hospitals, then Hospital Infection Control administrators will respond to that. Hence, cheap masks or no masks, no HEPA, no ventilation improvements beyond existing isolation wards, etc., and a captured CDC writing its guidance to accommodate them. Paradigm shifts cost money, even beyond the cost to careers!

private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
regulatory capture posts
https://www.garlic.com/~lynn/submisc.html#regulatory.capture
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

Riddle solved: Why was Roman concrete so durable?

From: Lynn Wheeler <lynn@garlic.com>
Subject: Riddle solved: Why was Roman concrete so durable?
Date: 07 Jan, 2023
Blog: Facebook
Riddle solved: Why was Roman concrete so durable? An unexpected ancient manufacturing strategy may hold the key to designing concrete that lasts for millennia.
https://news.mit.edu/2023/roman-concrete-durability-lime-casts-0106
Hot mixing: Mechanistic insights into the durability of ancient Roman concrete
https://www.science.org/doi/10.1126/sciadv.add1602
Why Roman concrete outlasts its modern counterpart
https://www.cnn.com/style/article/roman-concrete-mystery-ingredient-scn/index.html
How to Make Roman Concrete, One of Human Civilization's Longest-Lasting Building Materials
https://www.openculture.com/2022/12/how-to-make-roman-concrete-one-of-human-civilizations-longest-lasting-building-materials.html

past posts
https://www.garlic.com/~lynn/2021g.html#9 Miami Building Collapse Could Profoundly Change Engineering
https://www.garlic.com/~lynn/2017g.html#45 The most important invention from every state
https://www.garlic.com/~lynn/2013h.html#78 IBM commitment to academia

When we lived in Annapolis we used to walk around the inside perimeter of the academy a couple times a week ... spring/summer 2013 there was a major project to rebuild the 30-40yr old concrete sea walls that were rapidly deteriorating ... from a 2013 post:

... past couple months there has been lots of divers doing repair work on the seawall on the perimeter of the naval academy ... workers say that the concrete has significant erosion. Its only something like 30-40yrs old ... this compares to sea structures made from Roman concrete that have survived for 2000yrs "Ancient Roman Concrete Is About to Revolutionize Modern Architecture"
http://www.businessweek.com/articles/2013-06-14/ancient-roman-concrete-is-about-to-revolutionize-modern-architecture
The most common blend of modern concrete, known as Portland cement, a formulation in use for nearly 200 years, can't come close to matching that track record, says Marie Jackson, a research engineer at the University of California at Berkeley who was part of the Roman concrete research team. "The maritime environment, in particular, is not good for Portland concrete. In seawater, it has a service life of less than 50 years. After that, it begins to erode," Jackson says.

... snip ...

Roman Seawater Concrete Holds the Secret to Cutting Carbon Emissions
http://newscenter.lbl.gov/2013/06/04/roman-concrete/

--
virtualization experience starting Jan1968, online at home since Mar1970

History Is Un-American. Real Americans Create Their Own Futures

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: History Is Un-American. Real Americans Create Their Own Futures
Date: 08 Jan, 2023
Blog: Facebook
History Is Un-American. Real Americans Create Their Own Futures
https://www.linkedin.com/pulse/history-un-american-real-americans-create-own-futures-lynn-wheeler/
and
https://bracingviews.com/2023/01/04/history-is-un-american/
Karl Rove, a major player in the Bush/Cheney administration, summed up this hubris in this--now-infamous passage: "We're an empire now, and when we act, we create our own reality. And while you're studying that reality -- judiciously, as you will -- we'll act again, creating other new realities, which you can study too, and that's how things will sort out. We're history's actors . . . and you, all of you, will be left to just study what we do."

Related to the idea of history being un-American is the business- and management-oriented nature of the officer corps in the U.S. military. To be promoted to field-grade (major or lieutenant commander), you almost have to have a master's degree or be close to finishing one. But rarely do officers choose to pursue a master's in history or any other subject related to the humanities. The master's of choice is in business administration or some type of management.

By pursuing MBAs and management degrees, officers show their practical nature.


... snip ...

I was introduced to John Boyd in the early 80s and would sponsor his briefings. He would comment that former military officers, steeped in rigid, top-down command and control, were beginning to contaminate US corporate culture. It originated in WW2, when the US had to deploy millions while relying on very few with military experience: he observed that the US required 11% officers (growing to 20%) while the Germans had 3% (or less). However, about the same time, articles started to appear that MBAs were destroying US corporations (with their myopic focus on short-term results). Longer discussion
https://www.linkedin.com/pulse/ibm-controlling-market-lynn-wheeler/

archived Boyd posts
https://www.garlic.com/~lynn/subboyd.html

Wars and More Wars: The Sorry U.S. History in the Middle East
https://www.counterpunch.org/2022/12/30/wars-and-more-wars-the-sorry-u-s-history-in-the-middle-east/

The World Crisis, Vol. 1, Churchill explains the mess in middle east started with move from 13.5in to 15in Naval guns (leading to moving from coal to oil)
https://www.amazon.com/Crisis-1911-1914-Winston-Churchill-Collection-ebook/dp/B07H18FWXR/
loc2012-14:
From the beginning there appeared a ship carrying ten 15-inch guns, and therefore at least 600 feet long with room inside her for engines which would drive her 21 knots and capacity to carry armour which on the armoured belt, the turrets and the conning tower would reach the thickness unprecedented in the British Service of 13 inches.

loc2087-89:
To build any large additional number of oil-burning ships meant basing our naval supremacy upon oil. But oil was not found in appreciable quantities in our islands. If we required it, we must carry it by sea in peace or war from distant countries.

loc2151-56:
This led to enormous expense and to tremendous opposition on the Naval Estimates. Yet it was absolutely impossible to turn back. We could only fight our way forward, and finally we found our way to the Anglo-Persian Oil agreement and contract, which for an initial investment of two millions of public money (subsequently increased to five millions) has not only secured to the Navy a very substantial proportion of its oil supply, but has led to the acquisition by the Government of a controlling share in oil properties and interests which are at present valued at scores of millions sterling, and also to very considerable economies, which are still continuing, in the purchase price of Admiralty oil.

... snip ...

When the newly elected democratic government wanted to review the Anglo-Persian contract, the US arranged a coup and backed the Shah as front
https://unredacted.com/2018/03/19/cia-caught-between-operational-security-and-analytical-quality-in-1953-iran-coup-planning/
https://en.wikipedia.org/wiki/Kermit_Roosevelt,_Jr%2E
https://en.wikipedia.org/wiki/1953_Iranian_coup_d%27%C3%A9tat
... and Schwarzkopf (senior) training of the secret police to help keep the Shah in power
https://en.wikipedia.org/wiki/SAVAK
Savak Agent Describes How He Tortured Hundreds
https://www.nytimes.com/1979/06/18/archives/savak-agent-describes-how-he-tortured-hundreds-trial-is-in-a-mosque.html

Iran people eventually revolt against the horribly oppressive, (US backed) autocratic government.

CIA Director Colby wouldn't approve the "Team B" analysis (exaggerated USSR military capability), so Rumsfeld got Colby replaced with Bush, who would approve the "Team B" analysis (justifying a huge DOD spending increase); after replacing Colby, Rumsfeld resigns as white house chief of staff to become SECDEF (and is replaced by his assistant Cheney)
https://en.wikipedia.org/wiki/Team_B
Then in the 80s, former CIA director H.W. is VP, he and Rumsfeld are involved in supporting Iraq in the Iran/Iraq war
http://en.wikipedia.org/wiki/Iran%E2%80%93Iraq_War
including WMDs (note picture of Rumsfeld with Saddam)
http://en.wikipedia.org/wiki/United_States_support_for_Iraq_during_the_Iran%E2%80%93Iraq_war

VP and former CIA director repeatedly claims no knowledge of
http://en.wikipedia.org/wiki/Iran%E2%80%93Contra_affair
because he was the fulltime administration point person deregulating the financial industry ... creating the S&L crisis
http://en.wikipedia.org/wiki/Savings_and_loan_crisis
along with other members of his family
http://en.wikipedia.org/wiki/Savings_and_loan_crisis#Silverado_Savings_and_Loan
and another
http://query.nytimes.com/gst/fullpage.html?res=9D0CE0D81E3BF937A25753C1A966958260

In the early 90s, H.W. is president and Cheney is SECDEF. Sat. photo recon analyst told white house that Saddam was marshaling forces to invade Kuwait. White house said that Saddam would do no such thing and proceeded to discredit the analyst. Later the analyst informed the white house that Saddam was marshaling forces to invade Saudi Arabia, now the white house has to choose between Saddam and the Saudis.
https://www.amazon.com/Long-Strange-Journey-Intelligence-ebook/dp/B004NNV5H2/

... roll forward ... Bush2 is president and presides over the huge cut in taxes, huge increase in spending, explosion in debt, the economic mess (70 times larger than his father's S&L crisis) and the forever wars, Cheney is VP, Rumsfeld is SECDEF and one of the Team B members is deputy SECDEF (and major architect of Iraq policy).
https://en.wikipedia.org/wiki/Paul_Wolfowitz

Before the Iraq invasion, the cousin of white house chief of staff Card was dealing with the Iraqis at the UN and was given evidence that WMDs (tracing back to the US in the Iran/Iraq war) had been decommissioned. The cousin shared it with (cousin, white house chief of staff) Card and others ... and then was locked up in a military hospital; the book was published in 2010 (4yrs before the decommissioned WMDs were declassified)
https://www.amazon.com/EXTREME-PREJUDICE-Terrifying-Story-Patriot-ebook/dp/B004HYHBK2/

NY Times series from 2014: the decommissioned WMDs (tracing back to the US from the Iran/Iraq war) had been found early in the invasion, but the information was classified for a decade
http://www.nytimes.com/interactive/2014/10/14/world/middleeast/us-casualties-of-iraq-chemical-weapons.html

note the military-industrial complex had wanted a war so badly that corporate reps were telling former eastern block countries that if they voted for IRAQ2 invasion in the UN, they would get membership in NATO and (directed appropriation) USAID (can *ONLY* be used for purchase of modern US arms, aka additional congressional gifts to MIC complex not in DOD budget). From the law of unintended consequences, the invaders were told to bypass ammo dumps looking for WMDs, when they got around to going back, over a million metric tons had evaporated (showing up later in IEDs)
https://www.amazon.com/Prophets-War-Lockheed-Military-Industrial-ebook/dp/B0047T86BA/

... from truth is stranger than fiction and the law of unintended consequences that comes back to bite you: much of radical Islam & ISIS can be considered our own fault, VP Bush in the 80s
https://www.amazon.com/Family-Secrets-Americas-Invisible-Government-ebook/dp/B003NSBMNA/
pg292/loc6057-59:
There was also a calculated decision to use the Saudis as surrogates in the cold war. The United States actually encouraged Saudi efforts to spread the extremist Wahhabi form of Islam as a way of stirring up large Muslim communities in Soviet-controlled countries. (It didn't hurt that Muslim Soviet Asia contained what were believed to be the world's largest undeveloped reserves of oil.)

... snip ...

Saudi radical extremist Islam/Wahhabi loosed on the world ... bin Laden & 15 of the 19 9/11 hijackers were Saudis (some claims that 95% of extreme Islam world terrorism is Wahhabi related)
https://en.wikipedia.org/wiki/Wahhabism

Mattis, somewhat more PC (politically correct)
https://www.amazon.com/Call-Sign-Chaos-Learning-Lead-ebook/dp/B07SBRFVNH/
pg21/loc349-51:
Ayatollah Khomeini's revolutionary regime took hold in Iran by ousting the Shah and swearing hostility against the United States. That same year, the Soviet Union was pouring troops into Afghanistan to prop up a pro-Russian government that was opposed by Sunni Islamist fundamentalists and tribal factions. The United States was supporting Saudi Arabia's involvement in forming a counterweight to Soviet influence.

... snip ...

and internal CIA
https://www.amazon.com/Permanent-Record-Edward-Snowden-ebook/dp/B07STQPGH6/
pg133/loc1916-17:
But al-Qaeda did maintain unusually close ties with our allies the Saudis, a fact that the Bush White House worked suspiciously hard to suppress as we went to war with two other countries.

... snip ...

The Danger of Fibbing Our Way into War. Falsehoods and fat military budgets can make conflict more likely
https://web.archive.org/web/20200317032532/https://www.pogo.org/analysis/2020/01/the-danger-of-fibbing-our-way-into-war/
The Day I Realized I Would Never Find Weapons of Mass Destruction in Iraq
https://www.nytimes.com/2020/01/29/magazine/iraq-weapons-mass-destruction.html

The Deep State (US administration behind formation of ISIS)
https://www.amazon.com/Deep-State-Constitution-Shadow-Government-ebook/dp/B00W2ZKIQM/
pg190/loc3054-55:
In early 2001, just before George W. Bush's inauguration, the Heritage Foundation produced a policy document designed to help the incoming administration choose personnel

pg191/loc3057-58:
In this document the authors stated the following: "The Office of Presidential Personnel (OPP) must make appointment decisions based on loyalty first and expertise second,

pg191/loc3060-62:
Americans have paid a high price for our Leninist personnel policies, and not only in domestic matters. In important national security concerns such as staffing the Coalition Provisional Authority, a sort of viceroyalty to administer Iraq until a real Iraqi government could be formed, the same guiding principle of loyalty before competence applied.

... snip ...

... including kicking hundreds of thousands of former soldiers out on the streets, which created ISIS ... while bypassing the ammo dumps (looking for fictitious/fabricated WMDs) gave them over a million metric tons (for IEDs).

Team B posts
https://www.garlic.com/~lynn/submisc.html#team.b
S&L crisis posts
https://www.garlic.com/~lynn/submisc.html#s&l.crisis
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
perpetual war posts
https://www.garlic.com/~lynn/submisc.html#perpetual.war
WMD posts
https://www.garlic.com/~lynn/submisc.html#wmds

recent related
https://www.linkedin.com/pulse/price-wars-lynn-wheeler/
https://www.linkedin.com/pulse/price-wars-part-ii-lynn-wheeler/

some archived posts with "price wars" refs
https://www.garlic.com/~lynn/2022e.html#106 Price Wars
https://www.garlic.com/~lynn/2022e.html#107 Price Wars
https://www.garlic.com/~lynn/2022f.html#21 Price Wars
https://www.garlic.com/~lynn/2022g.html#20 9/11
https://www.garlic.com/~lynn/2022g.html#24 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022g.html#80 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022h.html#18 Sun Tzu, Aristotle, and John Boyd

economic mess posts
https://www.garlic.com/~lynn/submisc.html#economic.mess
Pecora &/or Glass-Steagall post
https://www.garlic.com/~lynn/submisc.html#Pecora&/orGlass-Steagall
inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
Too Big To Fail (Too Big To Prosecute, Too Big To Jail)
https://www.garlic.com/~lynn/submisc.html#too-big-to-fail
Toxic CDO posts
https://www.garlic.com/~lynn/submisc.html#toxic.cdo
GRIFTOPIA posts
https://www.garlic.com/~lynn/submisc.html#griftopia
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Loses Top Patent Spot After Decades as Leader

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Loses Top Patent Spot After Decades as Leader
Date: 09 Jan, 2023
Blog: Facebook
after seeing this reference

Why IBM is no longer interested in breaking patent records-and how it plans to measure innovation in the age of open source and quantum computing
https://fortune.com/2023/01/06/ibm-patent-record-how-to-measure-innovation-open-source-quantum-computing-tech/

I wondered what was cause and what was effect: did the strategy shift because IBM lost the top spot, or did it lose the top spot because the strategy shifted?

IBM Loses Top Patent Spot After Decades as Leader
https://www.bloomberg.com/news/articles/2023-01-06/ibm-loses-top-patent-spot-after-decades-as-ip-leader

I first saw the IBM article about shifting strategy away from the top spot in patents and conjectured that was publicity spin: they had lost the top spot because they no longer spent the money, i.e. money going instead to stock buybacks and dividends (and a purposeful strategy sounded better). Recent posts over on linkedin

IBM Breakup
https://www.linkedin.com/pulse/ibm-breakup-lynn-wheeler/
IBM Downfall
https://www.linkedin.com/pulse/ibm-downfall-lynn-wheeler/
failure in attempt to save IBM from the careerists and bureaucrats
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

archived posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Marketing, Sales, Branch Offices

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Marketing, Sales, Branch Offices
Date: 09 Jan, 2023
Blog: Facebook
As undergraduate, within a year of taking a 2-credit-hr intro to fortran/computers, the univ. hires me fulltime responsible for os/360; then before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing in an independent business unit to better monetize the investment, including offering services to non-Boeing entities). I thought the Renton datacenter was possibly the largest in the world, a couple hundred million dollars in 360s ... 360/65s arriving faster than they could be installed, boxes constantly staged in the hallways around the machine room. Both Boeing people and the IBM account team tell a story about 360 announcement day ... Boeing walks in and places a large 360 order with the marketing rep (who hardly knows what a 360 is) ... this was in the days of straight commission ... the claim was that it made the marketing rep the highest paid IBMer that year. The following year, IBM transitions from commission to quota. In Jan, Boeing makes another large order ... making the rep's quota for the year. His quota then gets "adjusted" ... he leaves IBM shortly later.

In the early 80s, I was introduced to John Boyd and sponsored his briefings at IBM. His biographies mention that he was put in command of "spook base" (about the same time I was at Boeing), which was a $2.5B windfall for IBM (ten times Boeing). Some of the IBM account there:
https://www.amazon.com/When-Big-Blue-Went-War-ebook/dp/B07923TFH5/

After I graduated, I joined the IBM Cambridge Science Center (instead of staying at Boeing). One of my hobbies was enhanced production operating systems for internal datacenters (the online sales&marketing support HONE systems were a long time customer) ... I also got to attend user group meetings and drop in on customers. The data center manager for one of the largest financial industry IBM accounts used to like me to drop in and talk technology. Then the branch manager horribly offended the customer and in retribution they were ordering an Amdahl machine (the first for a true-blue commercial account; up until then Amdahl had only sold into the technical/scientific/univ market). I was supposed to help obfuscate the reason for the Amdahl order by going onsite for a year ... more details in this long winded account
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

In 1975, somehow AT&T longlines was allowed to get a copy of one of my internal VM370 systems. Note: in the morph from CP67 to VM370, a lot of features were dropped (including tightly coupled multiprocessor support) and/or greatly simplified. I spent much of 1974 upgrading VM370 with CP67 enhancements, and somehow AT&T acquired one of those systems. This was before I had reimplemented multiprocessor support, originally for the US HONE complex (the online sales&marketing support HONE was a longtime customer, predating VM370 days). In the early 80s, the AT&T IBM account rep tracks me down at San Jose Research wanting help with AT&T's descendants of my 1975 VM370 system. Turns out they had propagated my internal VM370 system around AT&T, moving it to the latest 370 models. However, in the early 80s the next new IBM machine was the 3081, which was multiprocessor only, and IBM was afraid that AT&T (as well as the ACP/TPF market; ACP/TPF at the time didn't have multiprocessor support) would all move to Amdahl (which had a single processor machine with about the same performance as the two-processor 3081K).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
Boyd posts & URL refs
https://www.garlic.com/~lynn/subboyd.html
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

some past posts mentioning AT&T longlines
https://www.garlic.com/~lynn/2022.html#101 Online Computer Conferencing
https://www.garlic.com/~lynn/2021k.html#63 1973 Holmdel IBM 370's
https://www.garlic.com/~lynn/2019d.html#121 IBM Acronyms
https://www.garlic.com/~lynn/2017k.html#33 Bad History
https://www.garlic.com/~lynn/2017d.html#80 Mainframe operating systems?
https://www.garlic.com/~lynn/2017d.html#48 360 announce day
https://www.garlic.com/~lynn/2016g.html#68 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2015c.html#27 30 yr old email
https://www.garlic.com/~lynn/2015.html#85 a bit of hope? What was old is new again
https://www.garlic.com/~lynn/2013b.html#37 AT&T Holmdel Computer Center films, 1973 Unix
https://www.garlic.com/~lynn/2012f.html#59 Hard Disk Drive Construction
https://www.garlic.com/~lynn/2011g.html#7 Is the magic and romance killed by Windows (and Linux)?
https://www.garlic.com/~lynn/2008l.html#82 Yet another squirrel question - Results (very very long post)
https://www.garlic.com/~lynn/2008i.html#14 DASD or TAPE attached via TCP/IP
https://www.garlic.com/~lynn/2008.html#41 IT managers stymied by limits of x86 virtualization
https://www.garlic.com/~lynn/2008.html#30 hacked TOPS-10 monitors
https://www.garlic.com/~lynn/2008.html#29 Need Help filtering out sporge in comp.arch
https://www.garlic.com/~lynn/2007v.html#15 folklore indeed
https://www.garlic.com/~lynn/2007u.html#6 Open z/Architecture or Not
https://www.garlic.com/~lynn/2007g.html#54 The Perfect Computer - 36 bits?
https://www.garlic.com/~lynn/2006b.html#21 IBM 3090/VM Humor
https://www.garlic.com/~lynn/2005p.html#31 z/VM performance
https://www.garlic.com/~lynn/2004m.html#58 Shipwrecks
https://www.garlic.com/~lynn/2004e.html#32 The attack of the killer mainframes
https://www.garlic.com/~lynn/2003d.html#46 unix
https://www.garlic.com/~lynn/2003.html#17 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2002p.html#23 Cost of computing in 1958?
https://www.garlic.com/~lynn/2002i.html#32 IBM was: CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002c.html#11 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002.html#11 The demise of compaq
https://www.garlic.com/~lynn/2002.html#4 Buffer overflow
https://www.garlic.com/~lynn/2001f.html#3 Oldest program you've written, and still in use?
https://www.garlic.com/~lynn/2000f.html#60 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2000.html#5 IBM XT/370 and AT/370 (was Re: Computer of the century)
https://www.garlic.com/~lynn/97.html#15 OSes commerical, history
https://www.garlic.com/~lynn/96.html#35 Mainframes & Unix (and TPF)
https://www.garlic.com/~lynn/95.html#14 characters

--
virtualization experience starting Jan1968, online at home since Mar1970

NCSS and Dun & Bradstreet

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: NCSS and Dun & Bradstreet
Date: 09 Jan, 2023
Blog: Facebook
D&B trivia; one of the CP67 spinoffs of the IBM Cambridge Science Center in the 60s was NCSS ... which is later bought by Dun & Bradstreet
https://en.wikipedia.org/wiki/National_CSS
above mentions
https://en.wikipedia.org/wiki/Nomad_software

even before SQL (& RDBMS), originally done on VM370/CMS (aka System/R at IBM SJR, later tech transfer to Endicott for SQL/DS and to STL for DB2), there were other "4th Generation Languages"; one of the original 4th generation languages, from Mathematica, was made available through NCSS (cp67/cms online precursor to vm370/cms)
http://www.decosta.com/Nomad/tales/history.html
One could say PRINT ACROSS MONTH SUM SALES BY DIVISION and receive a report that would have taken many hundreds of lines of Cobol to produce. The product grew in capability and in revenue, both to NCSS and to Mathematica, who enjoyed increasing royalty payments from the sizable customer base. FOCUS from Information Builders, Inc (IBI), did even better, with revenue approaching a reported $150M per year. RAMIS moved among several owners, ending at Computer Associates in 1990, and has had little limelight since. NOMAD's owners, Thomson, continue to market the language from Aonix, Inc. While the three continue to deliver 10-to-1 coding improvements over the 3GL alternatives of Fortran, Cobol, or PL/1, the movements to object orientation and outsourcing have stagnated acceptance.

... snip ...
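The quoted NOMAD one-liner is essentially a grouped cross-tab report: sum sales by division, with one column per month. As a rough illustration of what the 4GL collapses into a single statement, here is a minimal Python sketch (the data and names are hypothetical, nothing from the actual NOMAD implementation):

```python
from collections import defaultdict

# Hypothetical sales records: (division, month, amount) -- illustrative data only.
sales = [
    ("East", "Jan", 100.0), ("East", "Feb", 150.0),
    ("West", "Jan", 200.0), ("West", "Feb", 50.0),
]

def crosstab(records):
    """Cross-tab in the spirit of PRINT ACROSS MONTH SUM SALES BY DIVISION:
    rows are divisions, columns are months, cells are summed sales."""
    months = sorted({m for _, m, _ in records})
    totals = defaultdict(lambda: defaultdict(float))
    for div, month, amt in records:
        totals[div][month] += amt
    table = {d: [totals[d][m] for m in months] for d in sorted(totals)}
    return months, table

months, table = crosstab(sales)
print("DIVISION", *months)
for div, row in table.items():
    print(div, *row)
```

Even this toy version takes a dozen lines of a modern scripting language; the "many hundreds of lines of Cobol" claim in the quote is about doing the same report with 3GL record-at-a-time I/O and hand-built control breaks.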

other history
https://en.wikipedia.org/wiki/Ramis_software
When Mathematica (also) makes Ramis available to TYMSHARE for their VM370-based commercial online service, NCSS does their own version, NOMAD
https://en.wikipedia.org/wiki/Nomad_software
and then follow-on FOCUS from IBI
https://en.wikipedia.org/wiki/FOCUS
Information Builders's FOCUS product began as an alternate product to Mathematica's RAMIS, the first Fourth-generation programming language (4GL). Key developers/programmers of RAMIS, some stayed with Mathematica others left to form the company that became Information Builders, known for its FOCUS product

... snip ...

4th gen programming language
https://en.wikipedia.org/wiki/Fourth-generation_programming_language

this mentions "first financial language" at IDC (another 60s cp67/cms spinoff from the IBM cambridge science center)
https://www.computerhistory.org/collections/catalog/102658182
as an aside, a decade later, the person doing FFL joins with another to form a startup and does the original spreadsheet
https://en.wikipedia.org/wiki/VisiCalc

TYMSHARE topic drift ...
https://en.wikipedia.org/wiki/Tymshare
In Aug1976, Tymshare started offering its CMS-based online computer conferencing free to (user group) SHARE
https://en.wikipedia.org/wiki/SHARE_(computing)
as VMSHARE ... archives here
http://vm.marist.edu/~vmshare

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
commercial online virtual machine service bureau posts
https://www.garlic.com/~lynn/submain.html#online

some recent archived ramis/nomad posts
https://www.garlic.com/~lynn/2022f.html#116 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022c.html#28 IBM Cambridge Science Center
https://www.garlic.com/~lynn/2022b.html#49 4th generation language
https://www.garlic.com/~lynn/2021k.html#92 Cobol and Jean Sammet
https://www.garlic.com/~lynn/2021g.html#23 report writer alternatives
https://www.garlic.com/~lynn/2021f.html#67 RDBMS, SQL, QBE
https://www.garlic.com/~lynn/2021c.html#29 System/R, QBE, IMS, EAGLE, IDEA, DB2
https://www.garlic.com/~lynn/2019d.html#16 The amount of software running on traditional servers is set to almost halve in the next 3 years amid the shift to the cloud, and it's great news for the data center business
https://www.garlic.com/~lynn/2019d.html#4 IBM Midrange today?
https://www.garlic.com/~lynn/2018d.html#3 Has Microsoft commuted suicide
https://www.garlic.com/~lynn/2018c.html#85 z/VM Live Guest Relocation
https://www.garlic.com/~lynn/2017j.html#39 The complete history of the IBM PC, part two: The DOS empire strikes; The real victor was Microsoft, which built an empire on the back of a shadily acquired MS-DOS
https://www.garlic.com/~lynn/2017j.html#29 Db2! was: NODE.js for z/OS
https://www.garlic.com/~lynn/2017c.html#85 Great mainframe history(?)
https://www.garlic.com/~lynn/2017.html#28 {wtf} Tymshare SuperBasic Source Code
https://www.garlic.com/~lynn/2016e.html#107 some computer and online history
https://www.garlic.com/~lynn/2015h.html#27 the legacy of Seymour Cray
https://www.garlic.com/~lynn/2014k.html#40 How Larry Ellison Became The Fifth Richest Man In The World By Using IBM's Idea
https://www.garlic.com/~lynn/2014j.html#101 Flat (VSAM or other) files still in use?
https://www.garlic.com/~lynn/2014i.html#32 Speed of computers--wave equation for the copper atom? (curiosity)
https://www.garlic.com/~lynn/2014e.html#34 Before the Internet: The golden age of online services
https://www.garlic.com/~lynn/2014c.html#77 Bloat
https://www.garlic.com/~lynn/2013m.html#62 Google F1 was: Re: MongoDB
https://www.garlic.com/~lynn/2013g.html#16 Old data storage or data base
https://www.garlic.com/~lynn/2013f.html#63 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013c.html#57 Article for the boss: COBOL will outlive us all
https://www.garlic.com/~lynn/2013c.html#56 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012n.html#30 General Mills computer
https://www.garlic.com/~lynn/2012e.html#84 Time to competency for new software language?
https://www.garlic.com/~lynn/2012d.html#51 From Who originated the phrase "user-friendly"?
https://www.garlic.com/~lynn/2012b.html#60 Has anyone successfully migrated off mainframes?

--
virtualization experience starting Jan1968, online at home since Mar1970

360 Announce and then the Future System Disaster

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 360 Announce and then the Future System Disaster
Date: 09 Jan, 2023
Blog: Facebook
and then the 70s, Future System disaster, from Ferguson & Morris, "Computer Wars: The Post-IBM World", Time Books, 1993
http://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394
... and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrongheadedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive.

... snip ...

there are claims that if any other computer company had a failed project of the magnitude of "Future System", it would have driven them into bankruptcy. more FS info
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

more on the bureaucrats, careerists, and MBAs destroying Watson legacy and IBM
https://www.linkedin.com/pulse/more-john-boyd-ooda-loop-lynn-wheeler/
also the "Rise and Fall" of IBM
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Marketing, Sales, Branch Offices

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Marketing, Sales, Branch Offices
Date: 10 Jan, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#12 IBM Marketing, Sales, Branch Offices

Boyd was very vocal that the electronic sensors on the trail wouldn't work; possibly as punishment he was put in command of Spook Base. Spook Base reference ... gone 404, but lives on at the wayback machine ... includes references to "Quacker" (pilotless drone).
https://web.archive.org/web/20030212092342/http://home.att.net/~c.jeppeson/igloo_white.html
also
https://en.wikipedia.org/wiki/Operation_Igloo_White

Boyd posts and web references
https://www.garlic.com/~lynn/subboyd.html

one of the IBMers that was in SE Asia (out of the Hawaii office) has commented (in this group) in the past that they did get to go to the 100% club ... but weren't singled out as the #1 that year, despite earning significantly more than the ones who were singled out ... conjecture was that IBM didn't want to highlight how much earnings were coming from that office.

past refs:
https://www.garlic.com/~lynn/2022e.html#36 IBM 23June1969 Unbundle
https://www.garlic.com/~lynn/2021e.html#80 Amdahl
https://www.garlic.com/~lynn/2021.html#49 IBM Quota
https://www.garlic.com/~lynn/2019e.html#77 Collins radio 1956

Online sales&marketing support HONE system posts
https://www.garlic.com/~lynn/subtopic.html#hone

--
virtualization experience starting Jan1968, online at home since Mar1970

INTEROP 88 Santa Clara

From: Lynn Wheeler <lynn@garlic.com>
Subject: INTEROP 88 Santa Clara
Date: 10 Jan, 2023
Blog: Facebook
INTEROP 88 Santa Clara

Remember floor nets crashing the night before start ... resulted in recommendations in RFC1122 ... also way too much gosip in booths

I had a workstation in a (non-IBM) booth at immediate right angles to the SUN booth. Case was in the SUN booth demo'ing SNMP ... managed to convince him to come over and install SNMP.

re: gosip; that is GOSIP (Government OSI Profile) ... not the other kind ... some agencies were mandating elimination of Internet and TCP/IP.

Interop 88 posts
https://www.garlic.com/~lynn/subnetwork.html#interop88

other GOSIP posts
https://www.garlic.com/~lynn/2017j.html#33 How DARPA, The Secretive Agency That Invented The Internet, Is Working To Reinvent It
https://www.garlic.com/~lynn/2017g.html#86 IBM Train Wreck Continues Ahead of Earnings
https://www.garlic.com/~lynn/2014b.html#47 Resistance to Java
https://www.garlic.com/~lynn/2011k.html#75 Somewhat off-topic: comp-arch.net cloned, possibly hacked
https://www.garlic.com/~lynn/2011b.html#17 Rare Apple I computer sells for $216,000 in London
https://www.garlic.com/~lynn/2010d.html#71 LPARs: More or Less?
https://www.garlic.com/~lynn/2009l.html#47 SNA: conflicting opinions
https://www.garlic.com/~lynn/2009l.html#3 VTAM security issue
https://www.garlic.com/~lynn/2006k.html#47 Hey! Keep Your Hands Out Of My Abstraction Layer!
https://www.garlic.com/~lynn/2006k.html#45 Hey! Keep Your Hands Out Of My Abstraction Layer!
https://www.garlic.com/~lynn/2005e.html#39 xml-security vs. native security
https://www.garlic.com/~lynn/2002m.html#59 The next big things that weren't
https://www.garlic.com/~lynn/2002i.html#15 Al Gore and the Internet
https://www.garlic.com/~lynn/2002g.html#30 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2001e.html#32 Blame it all on Microsoft
https://www.garlic.com/~lynn/2000d.html#70 When the Internet went private
https://www.garlic.com/~lynn/2000d.html#63 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000b.html#0 "Mainframe" Usage
https://www.garlic.com/~lynn/aadsm12.htm#23 10 choices that were critical to the Net's success

--
virtualization experience starting Jan1968, online at home since Mar1970

Gangsters of Capitalism

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Gangsters of Capitalism
Date: 11 Jan, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022f.html#65 Gangsters of Capitalism
https://www.garlic.com/~lynn/2022f.html#66 Gangsters of Capitalism
and
https://www.garlic.com/~lynn/2022f.html#76 Why the Soviet computer failed

How Wall Street Created a Nation: J.P. Morgan, Teddy Roosevelt, and the Panama Canal
http://www.amazon.com/How-Wall-Street-Created-Nation-ebook/dp/B00MDW60IY/
loc3367-70:
After Panama, Cromwell spent the better part of his time in France. He received the Grand Cross of the Legion of Honor from that country for his work in resurrecting the Panama Canal. Meanwhile, his firm, Sullivan & Cromwell, flourished as one of Wall Street's preeminent law firms under the direction of his handpicked successors, John Foster Dulles and Arthur H. Dean. Cromwell died in 1948 at the age of ninety-four, leaving nineteen million dollars and no heirs.

... snip ...

Gangsters of Capitalism
https://www.amazon.com/Gangsters-Capitalism-Smedley-Breaking-Americas-ebook/dp/B092T8KT1N/
Smedley Butler was the most celebrated warfighter of his time. Bestselling books were written about him. Hollywood adored him. Wherever the flag went, "The Fighting Quaker" went--serving in nearly every major overseas conflict from the Spanish War of 1898 until the eve of World War II. From his first days as a 16-year-old recruit at the newly seized Guantanamo Bay, he blazed a path for empire: helping annex the Philippines and the land for the Panama Canal, leading troops in China (twice), and helping invade and occupy Nicaragua, Puerto Rico, Haiti, Mexico, and more. Yet in retirement, Butler turned into a warrior against war, imperialism, and big business, declaring: "I was a racketeer for capitalism."

... snip ...

Treason in America from Aaron Burr to Averell Harriman
https://www.amazon.com/Treason-America-Aaron-Averell-Harriman-ebook/dp/B005VHDR1G/
loc6565-69:
Cromwell's "French" company was paid $40 million by the United States. Colombia was paid nothing. Teddy Roosevelt refused ever to disclose just who it was who got the $40 million, though it was known that William Nelson Cromwell received a fee of at least $600,000. Cromwell later hired a young lawyer named John Foster Dulles into his firm; Dulles would become Sullivan and Cromwell's boss, and lead the firm into its position as chief legal representative for Adolf Hitler and the international cartels of the German Nazis.

... snip ...

John Foster Dulles played major role rebuilding Germany economy, industry, military from the 20s up through the early 40s
https://www.amazon.com/Brothers-Foster-Dulles-Allen-Secret-ebook/dp/B00BY5QX1K/
loc905-7:
Foster was stunned by his brother's suggestion that Sullivan & Cromwell quit Germany. Many of his clients with interests there, including not just banks but corporations like Standard Oil and General Electric, wished Sullivan & Cromwell to remain active regardless of political conditions.

loc938-40:
At least one other senior partner at Sullivan & Cromwell, Eustace Seligman, was equally disturbed. In October 1939, six weeks after the Nazi invasion of Poland, he took the extraordinary step of sending Foster a formal memorandum disavowing what his old friend was saying about Nazism

... snip ...

June1940, Germany had a victory celebration at the NYC Waldorf-Astoria with major industrialists. Lots of them were there to hear how to do business with the Nazis
https://www.amazon.com/Man-Called-Intrepid-Incredible-Narrative-ebook/dp/B00V9QVE5O/

In something of a replay of the Nazi celebration, after the war 5,000 industrialists and corporations from across the US held a conference (also) at the Waldorf-Astoria; in part because they had gotten such a bad reputation for the depression and for supporting the Nazis, and attempting to refurbish their horribly corrupt and venal image, they approved a major propaganda campaign to equate Capitalism with Christianity.
https://www.amazon.com/One-Nation-Under-God-Corporate-ebook/dp/B00PWX7R56/

part of the result by the 50s was adding "under god" to the pledge of allegiance (and the US motto, "In God We Trust"). slightly cleaned up version
https://en.wikipedia.org/wiki/Pledge_of_Allegiance
Even though the movement behind inserting "under God" into the pledge might have been initiated by a private religious fraternity and even though references to God appear in previous versions of the pledge, historian Kevin M. Kruse asserts that this movement was an effort by corporate America to instill in the minds of the people that capitalism and free enterprise were heavenly blessed. Kruse acknowledges the insertion of the phrase was influenced by the push-back against Russian atheistic communism during the Cold War, but argues the longer arc of history shows the conflation of Christianity and capitalism as a challenge to the New Deal played the larger role.[28]

... snip ...

... in the genre of "banana republics" is "Economic Hitman"
https://www.amazon.com/New-Confessions-Economic-Hit-Man-ebook/dp/B017MZ8EBM/
wiki entry
https://en.wikipedia.org/wiki/Confessions_of_an_Economic_Hit_Man
A hit man repents
https://www.theguardian.com/books/2006/jan/28/usa.politics
If the threats of the economic hit men don't persuade, the "jackals" will come in to make good on them. The jackals, says Perkins, are the CIA-sanctioned heavy mob who foment coups and revolutions, murder, abduction and assassination. And when the jackals fail, as was the case in Iraq, then the military goes in.

... snip ...

Economic hit men and jackals: The destructive tools of empire
https://www.dailysabah.com/columns/hatem-bazian/2018/08/28/economic-hit-men-and-jackals-the-destructive-tools-of-empire
If the economic hit man tools fail then the jackals are sent with support from local military personnel and selected elites who are more than happy and ready to take over the franchise and enrich themselves on the back of their own population.

... snip ...

aka Smedley
https://en.wikipedia.org/wiki/Smedley_Butler
and "War is a Racket"
https://en.wikipedia.org/wiki/War_Is_a_Racket
also had turned whistleblower
https://en.wikipedia.org/wiki/Business_Plot

and along the lines of "Economic Hit Man", The Profiteers: Bechtel and the Men Who Built the World"
https://www.amazon.com/Profiteers-Bechtel-Men-Built-World-ebook/dp/B010MHAHV2/

then there is "Was Harvard responsible for the rise of Putin" ... after the fall of the Soviet Union, those sent over to teach capitalism were more intent on looting the country (and the Russians needed a Russian to oppose US looting). John Helmer: Convicted Fraudster Jonathan Hay, Harvard's Man Who Wrecked Russia, Resurfaces in Ukraine
http://www.nakedcapitalism.com/2015/02/convicted-fraudster-jonathan-hay-harvards-man-who-wrecked-russia-resurfaces-in-ukraine.html
If you are unfamiliar with this fiasco, which was also the true proximate cause of Larry Summers' ouster from Harvard, you must read an extraordinary expose, How Harvard Lost Russia, from Institutional Investor. I am told copies of this article were stuffed in every Harvard faculty member's inbox the day Summers got a vote of no confidence and resigned shortly thereafter.

... snip ...

How Harvard lost Russia; The best and brightest of America's premier university came to Moscow in the 1990s to teach Russians how to be capitalists. This is the inside story of how their efforts led to scandal and disgrace (gone 404, but lives on at wayback machine)
https://web.archive.org/web/20130211131020/http://www.institutionalinvestor.com/Article/1020662/How-Harvard-lost-Russia.html
Mostly, they hurt Russia and its hopes of establishing a lasting framework for a stable Western-style capitalism, as Summers himself acknowledged when he testified under oath in the U.S. lawsuit in Cambridge in 2002. "The project was of enormous value," said Summers, who by then had been installed as the president of Harvard. "Its cessation was damaging to Russian economic reform and to the U.S.-Russian relationship."

... snip ...

... US style capitalist kleptocracy has a long history ... even predating banana republics. related posts/articles

History Is Un-American. Real Americans Create Their Own Futures
https://www.linkedin.com/pulse/history-un-american-real-americans-create-own-futures-lynn-wheeler/
Price Wars
https://www.linkedin.com/pulse/price-wars-lynn-wheeler/
Price Wars (Part II WARS)
https://www.linkedin.com/pulse/price-wars-part-ii-lynn-wheeler/
The Warning
https://www.linkedin.com/pulse/warning-lynn-wheeler/
Plutocracy Rising repost from Oct 2012
https://www.linkedin.com/pulse/plutocracy-rising-repost-from-oct-2012-lynn-wheeler/
Bad Ideas
https://www.linkedin.com/pulse/bad-ideas-lynn-wheeler/
Economists are arguing over how their profession messed up during the Great Recession. This is what happened
https://www.linkedin.com/pulse/economists-arguing-over-how-profession-messed-up-during-lynn-wheeler/
Michael Hudson's New Book: Wall Street Parasites Have Devoured Their Hosts
https://www.linkedin.com/pulse/michael-hudsons-new-book-wall-street-parasites-have-devoured-wheeler/

capitalism
https://www.garlic.com/~lynn/submisc.html#capitalism
racism posts
https://www.garlic.com/~lynn/submisc.html#racism
inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
perpetual war posts
https://www.garlic.com/~lynn/submisc.html#perpetual.war
military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex

archived posts; History Is Un-American
https://www.garlic.com/~lynn/2023.html#10 History Is Un-American. Real Americans Create Their Own Futures
Price Wars
https://www.garlic.com/~lynn/2022e.html#106 Price Wars
https://www.garlic.com/~lynn/2022e.html#107 Price Wars
https://www.garlic.com/~lynn/2022f.html#21 Price Wars
The Warning
https://www.garlic.com/~lynn/2017j.html#12 The Warning
https://www.garlic.com/~lynn/2018.html#84 The Warning
Plutocracy Rising
https://www.garlic.com/~lynn/2017j.html#10 Plutocracy Rising repost from Oct 2012
https://www.garlic.com/~lynn/2017j.html#13 Plutocracy Rising repost from Oct 2012
Bad Ideas
https://www.garlic.com/~lynn/2017g.html#79 Bad Ideas
Economists are arguing
https://www.garlic.com/~lynn/2017d.html#67 Economists are arguing over how their profession messed up during the Great Recession. This is what happened
https://www.garlic.com/~lynn/2017d.html#69 Economists are arguing over how their profession messed up during the Great Recession. This is what happened
Wall Street Parasites
https://www.garlic.com/~lynn/2015g.html#65 Michael Hudson's New Book: Wall Street Parasites Have Devoured Their Hosts -- Your Retirement Plan and the U.S. Economy
https://www.garlic.com/~lynn/2015g.html#66 Michael Hudson's New Book: Wall Street Parasites Have Devoured Their Hosts -- Your Retirement Plan and the U.S. Economy

--
virtualization experience starting Jan1968, online at home since Mar1970

PROFS trivia

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: PROFS trivia
Date: 11 Jan, 2023
Blog: Facebook
PROFS trivia ... the PROFS group was collecting internal apps and wrapping menus around them for (the mostly computer illiterate) users. They acquired a very early version of VMSG for the email client. When the VMSG author tried to offer them a much enhanced version, they tried to get him fired (folklore is that they had already taken credit for VMSG). The whole thing quiets down when the VMSG author demonstrates that his initials are in every PROFS note (in a non-displayed field). After that, the VMSG author only shared his source with me and one other person. While PROFS might delete client copies ... they were still on the server's backup tapes. Later administrations became more savvy about server backup copies.

Late 70s, some of us at SJR in friday after-work sessions had lots of discussions about what to do about the mostly computer illiterate employees, managers, executives, etc. This was at a time when 3270 terminals were part of the annual budget process and required VP sign-off (even after we showed the 3yr depreciation was about the same monthly "cost" as a desk phone). Then there was a rapidly spreading rumor that members of the corporate executive committee were communicating via email, and the result was large numbers of (computer illiterate) managers pre-empting 3270 terminal deliveries for their desks. They would be powered on in the morning ... unused all day while the PROFS menu burned into the screen (any management email was actually handled by staff). More than a decade later there were still managers that rerouted large-screen PS2/486s to their desks for 3270 terminal emulation (status symbol and fabricated appearance of computer literacy, spending the days unused with the same PROFS menu burned into the screen).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

archive posts mentioning PROFS & VMSG
https://www.garlic.com/~lynn/2022f.html#64 Trump received subpoena before FBI search of Mar-a-lago home
https://www.garlic.com/~lynn/2021h.html#50 PROFS
https://www.garlic.com/~lynn/2018.html#18 IBM Profs
https://www.garlic.com/~lynn/2011p.html#78 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2011o.html#15 John R. Opel, RIP
https://www.garlic.com/~lynn/2007p.html#29 Newsweek article--baby boomers and computers
https://www.garlic.com/~lynn/2003j.html#56 Goodbye PROFS
https://www.garlic.com/~lynn/2003b.html#45 hyperblock drift, was filesystem structure (long warning)
https://www.garlic.com/~lynn/2002p.html#34 VSE (Was: Re: Refusal to change was Re: LE and COBOL)
https://www.garlic.com/~lynn/2002h.html#64 history of CMS

after 4300s were announced, but before they shipped ... I got con'ed into doing benchmarks for a national lab on the engineering 4341 I was running in bldg15 (for dasd product test) ... they were looking at getting 70 for a compute farm (sort of the leading edge of the coming cluster supercomputing tsunami). A small cluster of 4341s had much higher aggregate throughput than a 3033, was significantly cheaper, and needed much less floor space, power, and cooling. At one point the head of POK apparently felt so threatened that he convinced corporate that the allocation for a critical 4341 manufacturing component should be cut in half. Then we started to see large corporations ordering hundreds of vm/4300s at a time for placing out in departmental areas (sort of the leading edge of the coming distributed computing tsunami; inside IBM, conference rooms were in short supply with so many turning into departmental vm370 rooms).

MVS then wanted to play in that market ... but at first the only new mid-range DASD for non-datacenter install was the 3370 FBA (and MVS didn't have FBA support). Eventually CKD simulation was added to the 3370 for MVS's benefit, as the "3375". However, it didn't do MVS much good; the distributed departmental market was looking at tens of vm/4300 systems per support person ... while MVS still required multiple support people per system.

Trivia: I had been con'ed into helping Endicott with the VM microcode assist for the 138/148 (precursor to, and assist also used in, the 4331/4341) and then running around the world presenting details to business planners & forecasters. Endicott tried to convince corporate to allow VM to ship pre-installed on every machine (sort of like PR/SM-LPAR more than a decade later).

some 360/370 microcode posts
https://www.garlic.com/~lynn/submain.html#360mcode

However, the head of POK had only recently managed to convince corporate to kill the VM/370 product, shutdown the VM/370 development group and transfer all the people to POK (claiming otherwise MVS/XA wouldn't be able to ship on time). Endicott eventually manages to save the VM/370 product mission (but wasn't allowed to ship VM370 preinstalled on every machine) ... Endicott does have to recreate a VM370 development group from scratch (with some amount of customer quality complaints during this period). Other trivia: POK wasn't going to tell the VM/370 people they had to move to POK until the very last minute, to minimize the number that might escape ... however it leaked, and numerous people escaped into the Boston area (this was about the infancy of DEC VMS, and the joke was that the head of POK was a major contributor to DEC VMS). There was then a witch hunt for the source of the leak; fortunately for me, nobody gave up the source. My tome on bureaucrats, careerists and MBAs destroying IBM and the Watson legacy:
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

playing disk engineer posts
https://www.garlic.com/~lynn/subtopic.html#disk
IBM demise of wild ducks, downturn, downfall posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

recent archived posts mentioning supercomputer & distributed computing tsunami
https://www.garlic.com/~lynn/2023.html#1 IMS & DB2
https://www.garlic.com/~lynn/2022h.html#108 IBM 360
https://www.garlic.com/~lynn/2022h.html#48 smaller faster cheaper, computer history thru the lens of esthetics versus economics
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#8 IBM 4341
https://www.garlic.com/~lynn/2022f.html#92 CDC6600, Cray, Thornton
https://www.garlic.com/~lynn/2022f.html#90 CDC6600, Cray, Thornton
https://www.garlic.com/~lynn/2022f.html#89 CDC6600, Cray, Thornton
https://www.garlic.com/~lynn/2022f.html#28 IBM Power: The Servers that Apple Should Have Created
https://www.garlic.com/~lynn/2022f.html#26 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#67 SHARE LSRAD Report
https://www.garlic.com/~lynn/2022e.html#25 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#11 VM/370 Going Away
https://www.garlic.com/~lynn/2022d.html#101 IBM Stretch (7030) -- Aggressive Uniprocessor Parallelism
https://www.garlic.com/~lynn/2022d.html#86 IBM Z16 - The Mainframe Is Dead, Long Live The Mainframe
https://www.garlic.com/~lynn/2022d.html#78 US Takes Supercomputer Top Spot With First True Exascale Machine
https://www.garlic.com/~lynn/2022d.html#66 VM/370 Turns 50 2Aug2022
https://www.garlic.com/~lynn/2022c.html#101 IBM 4300, VS1, VM370
https://www.garlic.com/~lynn/2022c.html#56 ASCI White
https://www.garlic.com/~lynn/2022c.html#19 Telum & z16
https://www.garlic.com/~lynn/2022c.html#18 IBM Left Behind
https://www.garlic.com/~lynn/2022c.html#9 Cloud Timesharing
https://www.garlic.com/~lynn/2022c.html#5 4361/3092
https://www.garlic.com/~lynn/2022b.html#110 IBM 4341 & 3270
https://www.garlic.com/~lynn/2022b.html#102 370/158 Integrated Channel
https://www.garlic.com/~lynn/2022b.html#16 Channel I/O
https://www.garlic.com/~lynn/2022.html#124 TCP/IP and Mid-range market
https://www.garlic.com/~lynn/2022.html#59 370 Architecture Redbook
https://www.garlic.com/~lynn/2022.html#15 Mainframe I/O
https://www.garlic.com/~lynn/2021k.html#133 IBM Clone Controllers
https://www.garlic.com/~lynn/2021k.html#107 IBM Future System
https://www.garlic.com/~lynn/2021j.html#94 IBM 3278
https://www.garlic.com/~lynn/2021j.html#52 ESnet
https://www.garlic.com/~lynn/2021j.html#51 3380 DASD
https://www.garlic.com/~lynn/2021j.html#6 Pandora Papers: 'Biggest-Ever' Bombshell Leak Exposes Financial Secrets of the Super-Rich
https://www.garlic.com/~lynn/2021h.html#107 3277 graphics
https://www.garlic.com/~lynn/2021f.html#87 Mainframe mid-range computing market
https://www.garlic.com/~lynn/2021f.html#84 Mainframe mid-range computing market
https://www.garlic.com/~lynn/2021d.html#57 IBM 370
https://www.garlic.com/~lynn/2021c.html#67 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2021c.html#63 Distributed Computing
https://www.garlic.com/~lynn/2021c.html#50 IBM CEO
https://www.garlic.com/~lynn/2021c.html#47 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2021b.html#90 IBM Innovation
https://www.garlic.com/~lynn/2021b.html#55 In the 1970s, Email Was Special
https://www.garlic.com/~lynn/2021b.html#44 HA/CMP Marketing
https://www.garlic.com/~lynn/2021b.html#24 IBM Recruiting
https://www.garlic.com/~lynn/2021.html#76 4341 Benchmarks
https://www.garlic.com/~lynn/2021.html#53 Amdahl Computers
https://www.garlic.com/~lynn/2020.html#38 Early mainframe security
https://www.garlic.com/~lynn/2020.html#1 QE4 Started

--
virtualization experience starting Jan1968, online at home since Mar1970

Intel's Core i9-13900KS breaks the 6GHz barrier

From: Lynn Wheeler <lynn@garlic.com>
Subject: Intel's Core i9-13900KS breaks the 6GHz barrier
Date: 12 Jan, 2023
Blog: Facebook
Intel's Core i9-13900KS breaks the 6GHz barrier, launches today. Core i9-13900KS will be first CPU to reach 6GHz in stock form.
https://www.pcworld.com/article/1470602/intel-to-crack-6ghz-barrier-with-699-core-i9.html
Intel Unveils Core i9-13900KS: Raptor Lake Spreads Its Wings to 6.0 GHz
https://www.anandtech.com/show/18726/intel-unveils-core-i9-13900ks-raptor-lake-spreads-its-wings-to-6-0-ghz

recent archived posts about mainframe touting its chip ghz speeds:
https://www.garlic.com/~lynn/2022h.html#116 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022c.html#10 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2021i.html#7 A brief overview of IBM's new 7 nm Telum mainframe CPU

some other recent Intel chip news

Intel's new laptop CPUs offer 24 cores and blistering 5.6GHz speeds. All of these chips will be available in laptops beginning later this quarter, Intel said at CES.
https://www.pcworld.com/article/1435904/intel-launches-13th-gen-mobile-core-chips-with-endurance-gaming.html
Intel's latest Core processors bring the 13th-gen to the masses. Is Intel's desktop Raptor Lake family complete?
https://www.pcworld.com/article/1440715/intel-announces-the-13th-gen-core-desktop-chips-youll-likely-buy.html
Intel Announces Non-K 13th Gen Core For Desktop: New 65 W and 35 W Processors
https://www.anandtech.com/show/18702/intel-announces-non-k-13th-gen-core-for-desktop-new-65-w-and-35-w-processors

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Change

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Change
Date: 12 Jan, 2023
Blog: Facebook
Reference to Learson trying to counter the bureaucrats and careerists (& MBAs) destroying Watson legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

Note: with the 23Jun1969 unbundling announcement, IBM started to charge for software, maint., SE services, etc. Previously, part of SE training was a sort of apprenticeship program, as part of a large group onsite at the customer. After the unbundling announce, IBM couldn't figure out how not to charge for trainee SEs onsite at the customer (losing the large SE groups constantly onsite disrupted IBM's traditional account control, and also disrupted a critical part of training new generations of SEs). To address part of the SE training, HONE (hands-on network experience) was created, with branch office online access to CP67 datacenters for practice running guest operating systems. When 370 was initially announced, the HONE CP67 systems were enhanced with simulation for the new 370 instructions ... allowing branch offices to run 370 guest operating systems.
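
The HONE scheme of running "370" guests on older hardware by simulating the new instructions can be sketched as a toy dispatcher (a hypothetical illustration, not CP67 code — only the instruction mnemonics are real; in the real system, simulation was triggered by the operation-exception program interrupt):

```python
# Opcodes the CPU implements run natively; opcodes in the simulation
# table are emulated by the hypervisor; anything else is reflected back
# to the guest as a program check.
HARDWARE_OPCODES = {"LR", "AR", "MVC"}        # instructions the real CPU has

def sim_mvcl(state):                          # stand-in for simulating MVCL
    state["mvcl_simulated"] = state.get("mvcl_simulated", 0) + 1

SIMULATED_OPCODES = {"MVCL": sim_mvcl}        # "new 370" instructions

def dispatch(opcode, state):
    """Handle one guest instruction under the hypervisor."""
    if opcode in HARDWARE_OPCODES:
        return "hardware"                     # executes natively
    handler = SIMULATED_OPCODES.get(opcode)
    if handler is not None:
        handler(state)                        # hypervisor simulates it
        return "simulated"
    return "program-check"                    # reflect to the guest
```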

One of my hobbies after joining IBM was enhanced production systems for internal datacenters, and HONE was a long-time customer. The cambridge science center had also ported APL\360 to CMS for CMS\APL ... fixing its memory management for large virtual memory workspaces (APL\360 traditionally had 16kbyte workspaces) in a demand page environment, and adding APIs for system services (like file i/o, enabling lots of real world applications). HONE then started offering online APL-based sales&marketing tools ... which came to dominate all HONE activity ... and SE training with guest operating systems just dwindled away (with SE skill levels dropping ... SEs increasingly becoming a phone directory of internal IBM technical contacts). some discussion in controlling market post
https://www.linkedin.com/pulse/ibm-controlling-market-lynn-wheeler/

23jun69 unbundling announce
https://www.garlic.com/~lynn/submain.html#unbundle
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Change

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Change
Date: 13 Jan, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#20 IBM Change

The previous post was about various long-term, systemic, strategic issues ... nearer term (to the 13 "baby blues" reorg in preparation for breaking up the company) and more tactical is something I've frequently posted about: the communication group's stranglehold on mainframe datacenters. In this recent (Facebook) post/thread on IBM Profs, I have a reply about vm/4300s as the leading edge of the coming distributed computing tsunami
https://www.garlic.com/~lynn/2023.html#18 PROFS trivia

3270 dumb terminal emulation helped give it early market penetration ... but as distributed computing (and client/server) evolved, the communication group's strategic ownership of everything that crossed datacenter walls, and its fierce fighting off of distributed computing and client/server, gave it a death grip on mainframe datacenters.

Late 80s, a senior disk engineer got a talk scheduled at the internal, world-wide, annual communication group conference, supposedly on 3174 performance, but opened the talk with the statement that the communication group would be responsible for the demise of the disk division. The scenario was that the communication group had a stranglehold on mainframe datacenters with their corporate strategic ownership of everything that crossed datacenter walls (and were fiercely fighting off distributed computing and client/server). The disk division was seeing data fleeing mainframe datacenters to more distributed-computing-friendly platforms, with a drop in disk sales. The disk division had come up with a number of solutions, but they were constantly being vetoed by the communication group.

The GPD/Adstar VP's partial countermeasure (to the communication group death grip) was investing in distributed computing startups that would use IBM disks. He would periodically have us in to discuss his investments and ask if we could drop by and provide any assistance. One was "MESA Archival" in Boulder ... a spinoff startup of NCAR
https://en.wikipedia.org/wiki/National_Center_for_Atmospheric_Research
https://ncar.ucar.edu/

another communication group death grip story, about severely performance-kneecapping microchannel cards as part of their fierce battle fighting off distributed computing and client/server: AWD (workstation division) had the PC/RT with AT-bus and had done their own 4mbit token-ring card. Then for the RS/6000 with microchannel, they were told they could only use PS2 microchannel cards. One example: the PS2 16mbit token-ring card had lower throughput than the PC/RT 4mbit token-ring card (i.e. a PC/RT 4mbit token-ring server would have higher throughput than an RS/6000 16mbit token-ring server). The joke was that an RS/6000 limited to the severely performance-kneecapped PS2 microchannel cards wouldn't have better throughput than a PS2/486 (token-ring, display, disks, etc).

some more on the subject
https://www.linkedin.com/pulse/ibm-downfall-lynn-wheeler/
https://www.linkedin.com/pulse/ibm-breakup-lynn-wheeler/

dumb terminal emulation posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Punch Cards

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Punch Cards
Date: 13 Jan, 2023
Blog: Facebook
I took a two credit hour intro to fortran/computers and at the end of the semester was hired as a student programmer to re-implement 1401 MPIO on a 360/30. The univ had a 709/1401 (709 ran tape->tape, with the 1401 as unit record front end). They had been sold a 360/67 for TSS/360, replacing the 709/1401 ... temporarily, pending arrival of the 360/67, the 1401 was replaced with a 360/30. The 360/30 had 1401 emulation and they could have continued to run 1401 MPIO (tape->printer/punch & card reader->tape), but I guess they wanted to get 360 experience. I was given a whole bunch of manuals and got to design & implement my own monitor, interrupt handlers, device drivers, error recovery, storage management, etc. The univ. shutdown the datacenter on weekends and I would have the whole place to myself (although 48hrs w/o sleep made monday morning classes hard). After a few weeks I had a 2000 card 360 assembler program with an assembler option for running stand-alone or under os/360. The stand-alone version took 30mins to assemble ... the os/360 version took 60mins (each DCB macro took 5-6mins to assemble).

I quickly learned to read hex punch holes; fan the assembler output TXT deck looking for the card with the hex displacement needing the patch ... duplicate that card on an 029 out to the patch area and then multi-punch the hex patch into the new card. I would usually do that in much less time than it took to do a new assembly. Within a year of taking the intro class, the 360/67 arrived and I was hired fulltime, responsible for OS/360 (tss/360 never came to production, so it ran as a 360/65).
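
The "fanning the TXT deck" step can be sketched in modern terms: given 80-byte card images from an object deck, find the TXT card whose data covers a target displacement (the card you would duplicate and multi-punch a patch into). Field offsets follow the standard OS/360 object-deck layout; the sample card is fabricated for illustration.

```python
TXT_EBCDIC = b"\xe3\xe7\xe3"             # 'TXT' in EBCDIC

def find_patch_card(deck, displacement):
    """Return (card_index, offset_into_card_data) covering displacement."""
    for i, card in enumerate(deck):      # card: 80-byte EBCDIC image
        if card[0] != 0x02 or card[1:4] != TXT_EBCDIC:
            continue                     # not a TXT card (ESD, RLD, END, ...)
        addr = int.from_bytes(card[5:8], "big")     # 24-bit load address
        count = int.from_bytes(card[10:12], "big")  # bytes of text on card
        if addr <= displacement < addr + count:
            return i, displacement - addr
    return None

# fabricate one TXT card holding 56 bytes of text at displacement 0x100
card = bytearray(80)
card[0] = 0x02
card[1:4] = TXT_EBCDIC
card[5:8] = (0x100).to_bytes(3, "big")
card[10:12] = (56).to_bytes(2, "big")
print(find_patch_card([bytes(card)], 0x120))   # card 0, 0x20 into its data
```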

Student fortran jobs took less than a second with 709 IBSYS tape->tape; initially OS/360 took over a minute per student fortran job. I installed HASP, cutting the time in half. I then started reordering cards in the STAGE2 SYSGEN deck to carefully place files and PDS members to optimize disk arm seek and PDS directory lookup multi-track search, cutting another 2/3rds to 12.9secs. Student fortran jobs never got better than the 709's <1sec until I installed Univ of Waterloo's WATFOR.

trivia1: PTF "fixes" replacing system PDS members would destroy my careful ordering and result in gradual performance degradation. Sometimes there were so many "fixes" that I was forced to do an interim system generation to restore order and performance.

trivia2: JCL was going through evolution ... I had to do a lot of testing for every new OS/360 release; there were always some administrative production programs whose JCL would "break".

Before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with formation of Boeing Computer Services (consolidating a couple hundred million in dataprocessing into an independent business unit to better monetize the investment, including offering services to non-Boeing entities). I thought the Renton datacenter was possibly the largest in the world; 360/65s were arriving faster than they could be installed, boxes constantly being staged in the hallways around the machine room. Lots of politics between the director of Renton and the CFO ... who only had a 360/30 up at Boeing Field for payroll ... although they enlarge the machine room to install a 360/67 for me to play with when I'm not doing other stuff.

In the early 80s, I'm introduced to John Boyd and would sponsor his briefings at IBM. One of his stories was being very vocal that the electronics across the trail wouldn't work. Possibly as punishment, he is put in command of "spook base" (about the same time I'm at Boeing) ... his biographies say "spook base" was a $2.5B "windfall" for IBM (ten times Boeing).

Boyd posts & web URLs:
https://www.garlic.com/~lynn/subboyd.html

archived posts mentioning MPIO, WATFOR, and BCS:
https://www.garlic.com/~lynn/2023.html#5 1403 printer
https://www.garlic.com/~lynn/2022h.html#99 IBM 360
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2022g.html#95 Iconic consoles of the IBM System/360 mainframes, 55 years old
https://www.garlic.com/~lynn/2022g.html#11 360 Powerup
https://www.garlic.com/~lynn/2022e.html#95 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#42 WATFOR and CICS were both addressing some of the same OS/360 problems
https://www.garlic.com/~lynn/2022d.html#110 Window Display Ground Floor Datacenters
https://www.garlic.com/~lynn/2022d.html#95 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#57 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022d.html#19 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022d.html#10 Computer Server Market
https://www.garlic.com/~lynn/2022c.html#0 System Response
https://www.garlic.com/~lynn/2022b.html#35 Dataprocessing Career
https://www.garlic.com/~lynn/2022.html#48 Mainframe Career
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021j.html#64 addressing and protection, was Paper about ISO C
https://www.garlic.com/~lynn/2021j.html#63 IBM 360s
https://www.garlic.com/~lynn/2021f.html#20 1401 MPIO
https://www.garlic.com/~lynn/2021d.html#25 Field Support and PSRs
https://www.garlic.com/~lynn/2021.html#41 CADAM & Catia
https://www.garlic.com/~lynn/2020.html#32 IBM TSS
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles
https://www.garlic.com/~lynn/2018f.html#51 All programmers that developed in machine code and Assembly in the 1940s, 1950s and 1960s died?
https://www.garlic.com/~lynn/2010l.html#61 Mainframe Slang terms
https://www.garlic.com/~lynn/2010b.html#61 Source code for s/360 [PUBLIC]
https://www.garlic.com/~lynn/2009o.html#37 Young Developers Get Old Mainframers' Jobs
https://www.garlic.com/~lynn/99.html#130 early hardware

--
virtualization experience starting Jan1968, online at home since Mar1970

Health Care in Crisis: Warning! US Capitalism is Lethal

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Health Care in Crisis: Warning! US Capitalism is Lethal
Date: 13 Jan, 2023
Blog: Facebook
Health Care in Crisis: Warning! US Capitalism is Lethal
https://www.counterpunch.org/2023/01/13/health-care-in-crisis-warning-us-capitalism-is-lethal/
Private equity changes workforce stability in physician-owned medical practices
https://www.eurekalert.org/news-releases/975889

other recent private equity health care, hospital, medical practices posts
https://www.garlic.com/~lynn/2023.html#8 Ponzi Hospitals and Counterfeit Capitalism
https://www.garlic.com/~lynn/2022h.html#106 US health agency accused of bowing to drug industry with new opioid guidance
https://www.garlic.com/~lynn/2022.html#50 Science Fiction is a Luddite Literature
https://www.garlic.com/~lynn/2021k.html#82 Is Private Equity Overrated?
https://www.garlic.com/~lynn/2021h.html#20 Hospitals Face A Shortage Of Nurses As COVID Cases Soar
https://www.garlic.com/~lynn/2021g.html#64 Private Equity Now Buying Up Primary Care Practices
https://www.garlic.com/~lynn/2021g.html#58 The Storm Is Upon Us
https://www.garlic.com/~lynn/2021g.html#40 Why do people hate universal health care? It turns out -- they don't
https://www.garlic.com/~lynn/2021f.html#7 The Rise of Private Equity
https://www.garlic.com/~lynn/2021e.html#94 Drug Industry Money Quietly Backs Media Voices Against Sharing Vaccine Patents
https://www.garlic.com/~lynn/2021e.html#69 13 Facts About American Prisons That Will Blow Your Mind
https://www.garlic.com/~lynn/2021e.html#48 'Our Lives Don't Matter.' India's Female Community Health Workers Say the Government Is Failing to Protect Them From COVID-19

private equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
regulartory capture posts
https://www.garlic.com/~lynn/submisc.html#regulatory.capture
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Punch Cards

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Punch Cards
Date: 13 Jan, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#22 IBM Punch Cards

Some of the MIT CTSS/IBM7094 people went to the 5th flr to do multics. Others went to the IBM science center on the 4th flr and did virtual machines (cp40/cms & cp67/cms ... precursors to vm370/cms), the internal network (technology in the 80s also used for the corporate sponsored univ BITNET), performance tools, etc. Various pieces of CTSS were duplicated for CMS, including CTSS RUNOFF as SCRIPT. Then in 1969, GML was invented at the science center (name chosen because the first letters are the initials of the inventors' last names) and GML tag processing was added to CMS SCRIPT (after another decade it morphs into ISO standard SGML, and after another decade morphs into HTML at CERN).

One of the first mainstream IBM documents done in SCRIPT was the 370 Architecture Manual (called the "red book" for distribution in a 3-ring red binder). Using CMS script options, either the full architecture manual could be printed (with engineering considerations, justifications, alternatives) or the 370 Principles of Operation subset. You can usually tell the 370 script versions because they look like they were printed on a 1403/3211 printer ... here is a more recent version from the 80s:
https://bitsavers.org/pdf/ibm/370/princOps/SA22-7085-0_370-XA_Principles_of_Operation_Mar83.pdf

trivia: 1st webserver in the US is on the stanford SLAC (CERN sister institution) VM system:
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994
SGML history
https://web.archive.org/web/20230703135757/http://www.sgmlsource.com/history/sgmlhist.htm
CTSS RUNOFF ref (also has some CMS script history and mentions doing version for PC in 1985)
https://en.wikipedia.org/wiki/TYPSET_and_RUNOFF
trivia: in the late 70s, an IBM SE in LA had done script (NewScript and Allwrite) for TRS80

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
SGML posts
https://www.garlic.com/~lynn/submain.html#sgml

some archive posts mentioning newscript/allwrite
https://www.garlic.com/~lynn/2022c.html#99 IBM Bookmaster, GML, SGML, HTML
https://www.garlic.com/~lynn/2018b.html#94 Old word processors
https://www.garlic.com/~lynn/2012l.html#24 "execs" or "scripts"
https://www.garlic.com/~lynn/2011n.html#58 "Geek" t-shirts
https://www.garlic.com/~lynn/2004l.html#74 Specifying all biz rules in relational data

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Punch Cards

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Punch Cards
Date: 13 Jan, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#22 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#24 IBM Punch Cards

re: manual index sort order; this claims it was Learson's fault ... 360 was supposed to be an ASCII machine ... but the unit record gear wasn't ready ... so they temporarily shipped it as an EBCDIC machine with plans to convert later ... however, it didn't quite work out that way. The website is gone (404), but lives on at the wayback machine
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM
other refs
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/BACSLASH.HTM
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/FATHEROF.HTM
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/HISTORY.HTM
and
https://en.wikipedia.org/wiki/Bob_Bemer
https://history.computer.org/pioneers/bemer.html
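
The ASCII-vs-EBCDIC collating difference behind the "manual index sort order" question can be demonstrated directly (a minimal sketch; CP037 is used here as a representative EBCDIC code page):

```python
# Sort order differs because the code points differ: in ASCII, digits
# sort before uppercase, which sort before lowercase; in EBCDIC
# (CP037 here), lowercase sorts first, then uppercase, then digits.
chars = ["a", "A", "1"]

ascii_order = sorted(chars)                                   # ASCII code points
ebcdic_order = sorted(chars, key=lambda c: c.encode("cp037")) # EBCDIC code points

print(ascii_order)    # ['1', 'A', 'a']
print(ebcdic_order)   # ['a', 'A', '1']
```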

topic drift about Learson trying to fight the bureaucrats, careerists, and MBAs destroying the Watson legacy at IBM
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
archived Boyd posts
https://www.garlic.com/~lynn/subboyd.html

recent archived posts mentioning Bob Bemer
https://www.garlic.com/~lynn/2022h.html#100 IBM 360
https://www.garlic.com/~lynn/2022h.html#65 Fred P. Brooks, 1931-2022
https://www.garlic.com/~lynn/2022h.html#63 Computer History, OS/360, Fred Brooks, MMM
https://www.garlic.com/~lynn/2022d.html#24 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022c.html#116 What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022c.html#56 ASCI White
https://www.garlic.com/~lynn/2022c.html#51 Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022b.html#91 Computer BUNCH
https://www.garlic.com/~lynn/2022b.html#58 Interdata Computers
https://www.garlic.com/~lynn/2022b.html#13 360 Performance
https://www.garlic.com/~lynn/2022.html#126 On the origin of the /text section/ for code

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Punch Cards

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Punch Cards
Date: 13 Jan, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#22 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#24 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#25 IBM Punch Cards

for the fun of it, other topic drift ... early 80s, before REX was renamed REXX and released to customers, I wanted to show it wasn't just another pretty scripting language. I chose to redo a very large assembler-implemented application (system problem analysis and dump reader) ... the objective was to reimplement it in REX within three months working half time, with ten times the function *AND* ten times the performance (some sleight of hand for an interpreted language); turns out I finished early, so I wrote a library of automated functions that searched for common failure signatures. I also included a softcopy of IBM's messages & codes manual and provided various kinds of search features.

I thought it would replace the current app shipped to customers ... but for whatever reason it didn't, even though it was in use by nearly every internal datacenter and PSR. I eventually got permission to do presentations on how I did the implementation at various customer user group meetings, and after a few months, similar implementations started to appear.
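
The automated failure-signature search can be sketched like this (a hypothetical illustration in Python rather than REX; the signature table and messages are fabricated, not the real tool's):

```python
# Scan dump-output lines against a table of known failure signatures,
# returning (line number, signature, canned diagnosis) for each hit.
SIGNATURES = {                    # fabricated examples
    "ABEND=0C4": "protection exception -- check for storage overlay",
    "ABEND=0C1": "operation exception -- check for wild branch",
}

def scan_dump(lines, signatures=SIGNATURES):
    hits = []
    for n, line in enumerate(lines, 1):
        for sig, diagnosis in signatures.items():
            if sig in line:
                hits.append((n, sig, diagnosis))
    return hits
```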

Old email refs to the 3090 service processor (3092) group wanting to include it with the 3092:
https://www.garlic.com/~lynn/2010e.html#email861031
https://www.garlic.com/~lynn/2010e.html#email861223

archived dumprx posts
https://www.garlic.com/~lynn/submain.html#dumprx

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Punch Cards

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Punch Cards
Date: 14 Jan, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#22 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#24 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#25 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#26 IBM Punch Cards

some people from the science center came out and installed CP67 (virtual machines, precursor to VM370) at the univ (3rd installation after cambridge and lincoln labs) ... I mostly ran it during my weekend dedicated window and rewrote lots of the cp67 code. CP67 had 1052 & 2741 support, with some automagic code that determined terminal type and used the SAD CCW to dynamically assign the correct terminal line scanner to each port. The univ had a bunch of TTY/ASCII terminals and I added ASCII support ... integrated with the dynamic terminal type determination (trivia: when the ASCII/TTY hardware arrived to install in the 360 terminal controller, it came in a box from Heathkit). I then wanted to use a single dialup phone number for all terminal types
https://en.wikipedia.org/wiki/Line_hunting
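
The "automagic" terminal-type determination can be sketched as a probe loop (a toy model — the Port class and all names here are hypothetical; the real CP67 code drove the controller's SAD CCW to switch line scanners and watched for a valid response):

```python
class Port:
    """Toy stand-in for a controller port with one attached terminal."""
    def __init__(self, attached):
        self.attached = attached     # what is really on the line
        self.scanner = None
    def set_scanner(self, scanner):  # real code: SAD CCW selects line scanner
        self.scanner = scanner
    def probe(self):                 # real code: write ID sequence, read reply
        return self.scanner == self.attached

def identify_terminal(port, candidates=("1052", "2741", "TTY")):
    """Try each line scanner in turn; keep the first that answers."""
    for term in candidates:
        port.set_scanner(term)
        if port.probe():
            return term
    return None                      # unknown terminal type
```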

... however it didn't quite work ... IBM had taken a shortcut and hardwired the linespeed for each port. Thus was kicked off a univ. program to do a clone (terminal) controller: we built a channel interface card for an Interdata/3 programmed to emulate the IBM controller, with the addition that it did dynamic line speed recognition ... four of us get written up for (some part of) the IBM clone controller market. Later it was enhanced with an Interdata/4 for the channel interface and a cluster of Interdata/3s for the port interfaces. Interdata, and then Perkin-Elmer, sold them as IBM clone controllers.
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division
https://en.wikipedia.org/wiki/Concurrent_Computer_Corporation

IBM clone controller posts
https://www.garlic.com/~lynn/submain.html#360pcm

Note claims are that the motivation for the 70s IBM "Future System" project (totally different from 370 and intended to totally replace it) was to be so complex that clone makers wouldn't be able to keep up ... but then it could be said that it was so complex that IBM couldn't do it either (and the massive IBM effort imploded). Mentioned here as part of the destruction of the Watson legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
also future system refs:
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html
other refs about long term effects
https://www.linkedin.com/pulse/ibm-downfall-lynn-wheeler/
https://www.linkedin.com/pulse/ibm-breakup-lynn-wheeler/

Note that during FS, internal politics were shutting down 370 efforts ... and the claim is that the lack of new 370 products gave the clone 370 system makers their market foothold (i.e. from the law of unintended consequences, in trying to address the clone controller market, they enabled the clone system market)

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Punch Cards

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Punch Cards
Date: 14 Jan, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#22 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#24 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#25 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#26 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#27 IBM Punch Cards

Note SMART trivia: Dick left IBM and worked as a consultant. Mainstream IBM (like hudson valley) had huge problems with the 23jun1969 unbundling that started charging for software (and other stuff). Business rules were that revenue had to cover initial development plus ongoing enhancements and maintenance; forecasts typically did low, middle, & high price points (with the number of predicted licenses, to see if each could meet the business rules and whether there was any price elasticity) ... mainstream IBM would periodically find that there was NO price point that would meet the business rules. One of the first gimmicks was JES2 NJE: it was announced as a combined product with VM370 VNET/RSCS, with nearly all VNET revenue subsidizing NJE. The business rules were then relaxed so that they just applied to an organization as a whole. MVS ISPF had a similar problem to JES2 NJE ... so VM370 Performance Products (including SMART) was moved into the ISPF organization (at the time VM370 performance products had approx the same total revenue as ISPF). They cut the VM370 performance product group to three people so that nearly all of the VM370 revenue could subsidize the nearly 200 people (at the time) in the ISPF operation.

Post about co-worker at IBM responsible for the internal network,
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/
technology also used for the corporate sponsored univ. BITNET
https://en.wikipedia.org/wiki/BITNET

Internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
BITNET posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
23jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle

some past posts mentioning VM370 performance products subsidizing ISPF
https://www.garlic.com/~lynn/2022e.html#63 IBM Software Charging Rules
https://www.garlic.com/~lynn/2022c.html#45 IBM deliberately misclassified mainframe sales to enrich execs, lawsuit claims
https://www.garlic.com/~lynn/2021k.html#89 IBM PROFs
https://www.garlic.com/~lynn/2018d.html#49 What microprocessor is more powerful, the Z80 or 6502?
https://www.garlic.com/~lynn/2017i.html#23 progress in e-mail, such as AOL
https://www.garlic.com/~lynn/2017g.html#34 Programmers Who Use Spaces Paid More
https://www.garlic.com/~lynn/2017e.html#25 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017c.html#2 ISPF (was Fujitsu Mainframe Vs IBM mainframe)
https://www.garlic.com/~lynn/2014h.html#103 TSO Test does not support 65-bit debugging?
https://www.garlic.com/~lynn/2014f.html#89 Real Programmers
https://www.garlic.com/~lynn/2013i.html#36 The Subroutine Call
https://www.garlic.com/~lynn/2013e.html#84 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012n.html#64 Should you support or abandon the 3270 as a User Interface?
https://www.garlic.com/~lynn/2011p.html#106 SPF in 1978
https://www.garlic.com/~lynn/2010m.html#84 Set numbers off permanently
https://www.garlic.com/~lynn/2010g.html#50 Call for XEDIT freaks, submit ISPF requirements
https://www.garlic.com/~lynn/2010g.html#6 Call for XEDIT freaks, submit ISPF requirements
https://www.garlic.com/~lynn/2009s.html#46 DEC-10 SOS Editor Intra-Line Editing

--
virtualization experience starting Jan1968, online at home since Mar1970

Medicare Begins to Rein In Drug Costs for Older Americans

From: Lynn Wheeler <lynn@garlic.com>
Subject: Medicare Begins to Rein In Drug Costs for Older Americans
Date: 14 Jan, 2023
Blog: Facebook
Medicare Begins to Rein In Drug Costs for Older Americans. Reforms embedded in the Inflation Reduction Act will bring savings to seniors this year. Already some lawmakers are aiming to repeal the changes.
https://www.nytimes.com/2023/01/14/health/medicare-drug-prices.html

2002, congress lets the fiscal responsibility act lapse (spending couldn't exceed revenue, on its way to eliminating all federal debt). 2010 CBO report: 2003-2009, spending increased by $6T and tax revenue decreased by $6T, for a $12T difference from a fiscal responsibility budget (first time taxes were cut rather than raised to pay for wars; sort of a confluence of the Federal Reserve and Too Big To Fail needing huge federal debt, special interests wanting a huge tax cut, and the Military-Industrial Complex wanting a huge spending increase and perpetual wars).

The first major bill after the fiscal responsibility act lapsed was Medicare Part-D. CBS 60mins had a segment on the 18 Republicans shepherding "part-d" through; just before the final vote they added a one-line change that precluded competitive bidding, and prevented the CBO from distributing a report about the change. It also found that within 6-12 months of the vote, all 18 had resigned and were on drug industry payrolls (and showed prices of part-d drugs that were three times the prices of identical drugs subject to competitive bidding). The Comptroller General started including in speeches that part-D was an enormous gift to the pharmaceutical industry and would come to be a $40T unfunded mandate that would swamp all other budget items.

Medicare Part-d posts
https://www.garlic.com/~lynn/submisc.html#medicare.part-d
Fiscal Responsibility Act posts
https://www.garlic.com/~lynn/submisc.html#fiscal.responsibility.act
Comptroller General posts
https://www.garlic.com/~lynn/submisc.html#comptroller.general
Military-Industrial Complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
Perpetual War posts
https://www.garlic.com/~lynn/submisc.html#perpetual.war
Too Big To Fail (Too Big To Prosecute, Too Big To Jail)
https://www.garlic.com/~lynn/submisc.html#too-big-to-fail

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Change

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Change
Date: 15 Jan, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#20 IBM Change
https://www.garlic.com/~lynn/2023.html#21 IBM Change

also reference to Learson trying to counter the bureaucrats and careerists (& MBAs) destroying Watson legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
and then
https://www.linkedin.com/pulse/ibm-downfall-lynn-wheeler/
https://www.linkedin.com/pulse/ibm-breakup-lynn-wheeler/
also
https://www.linkedin.com/pulse/ibm-controlling-market-lynn-wheeler/

Opel's obit ...
https://www.pcworld.com/article/243311/former_ibm_ceo_john_opel_dies.html
According to the New York Times, it was Opel who met with Bill Gates, CEO of the then-small software firm Microsoft, to discuss the possibility of using Microsoft PC-DOS OS for IBM's about-to-be-released PC. Opel set up the meeting at the request of Gates' mother, Mary Maxwell Gates. The two had both served on the National United Way's executive committee.

... snip ...

When Boca claimed that they weren't doing software for ACORN (the unannounced IBM/PC), an IBM group in silicon valley was formed to do PC software ... every month the group checked that Boca wasn't doing software. Then all of a sudden, if you wanted to do software for ACORN you had to move to Boca. Boca wanted no internal IBM operations competing with them on ACORN (internal politics) ... even resorting to outsourcing with an external organization where Boca controlled the contract interface.

before msdos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was cp/m
https://en.wikipedia.org/wiki/CP/M
before developing cp/m, Kildall worked on cp67/cms at npg (gone 404, but lives on at the wayback machine)
https://web.archive.org/web/20071011100440/http://www.khet.net/gmc/docs/museum/en_cpmName.html
npg
https://en.wikipedia.org/wiki/Naval_Postgraduate_School

trivia: CP67/CMS was a precursor to personal computing; some of the MIT 7094/CTSS people
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
went to the 5th flr, project mac, and MULTICS.
https://en.wikipedia.org/wiki/Multics

Others went to the 4th flr, IBM Cambridge Science Center, and did virtual machine CP40/CMS (on a 360/40 with hardware mods for virtual memory; it morphs into CP67/CMS when the 360/67, standard with virtual memory, becomes available; precursor to vm370), online and performance apps, networking, etc. CTSS RUNOFF was redone for CMS as SCRIPT; GML was invented at the science center in 1969 (and GML tag processing added to SCRIPT; a decade later GML morphs into ISO SGML, and after another decade morphs into HTML at CERN).
https://en.wikipedia.org/wiki/CP/CMS
https://en.wikipedia.org/wiki/Conversational_Monitor_System

as in my oft repeated tale, the communication group was going to be responsible for the demise of the (GPD/Adstar) disk division ... the disk division wanted to adapt IBM backend computers to the changing/evolving user interface ... but the communication group was trying to keep their dumb terminal paradigm cast in concrete, fiercely fighting off client/server and distributed computing (with their corporate strategic ownership of everything that crossed the datacenter walls), veto'ing all changes that the disk division tried to make.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
script, gml, sgml, html, etc. posts
https://www.garlic.com/~lynn/submain.html#sgml
communication group dumb terminal paradigm posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

past archived posts mentioning Opel, DOS, Kildall, CP/M
https://www.garlic.com/~lynn/2022g.html#2 VM/370
https://www.garlic.com/~lynn/2022f.html#107 IBM Downfall
https://www.garlic.com/~lynn/2022f.html#72 IBM/PC
https://www.garlic.com/~lynn/2022f.html#17 What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022f.html#7 Vintage Computing
https://www.garlic.com/~lynn/2022e.html#44 IBM Chairman John Opel
https://www.garlic.com/~lynn/2022d.html#90 Short-term profits and long-term consequences -- did Jack Welch break capitalism?
https://www.garlic.com/~lynn/2022d.html#44 CMS Personal Computing Precursor
https://www.garlic.com/~lynn/2022b.html#111 The Rise of DOS: How Microsoft Got the IBM PC OS Contract
https://www.garlic.com/~lynn/2021k.html#22 MS/DOS for IBM/PC
https://www.garlic.com/~lynn/2019e.html#136 Half an operating system: The triumph and tragedy of OS/2
https://www.garlic.com/~lynn/2019d.html#71 Decline of IBM
https://www.garlic.com/~lynn/2018f.html#102 Netscape: The Fire That Filled Silicon Valley's First Bubble
https://www.garlic.com/~lynn/2012.html#100 The PC industry is heading for collapse

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Change

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Change
Date: 15 Jan, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#20 IBM Change
https://www.garlic.com/~lynn/2023.html#21 IBM Change
https://www.garlic.com/~lynn/2023.html#30 IBM Change

Starting in the early 80s, I had the HSDT project, T1 (1.5mbits/sec) and faster computer links; and was supposed to get $20M from the NSF director to interconnect the NSF supercomputer sites. Then congress cuts the budget, some other things happen, and finally an RFP is released (based in part on what we already had running). Preliminary announcement (28Mar1986)
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

IBM internal politics did not allow us to bid (being blamed for online computer conferencing (precursor to social media) inside IBM likely contributed; folklore is that 5of6 members of the corporate executive committee wanted to fire me). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying the IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid; RFP awarded 24Nov87). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet
https://www.technologyreview.com/s/401444/grid-computing/

Lots of battles with the communication group, since their mainframe controllers only supported up to 56kbit/sec links; they were constantly spreading misinformation and FUD trying to preserve their position. In the mid-80s, they even prepared a report for the corporate executive committee claiming that IBM customers wouldn't be wanting T1 until sometime well into the 90s. The 37x5 supported "fat pipes", multiple parallel 56kbit links treated as a single link. They surveyed customers and found nobody with more than six 56kbit links in a "fat pipe". What they didn't know (or avoided explaining) was that the typical telco tariff for a T1 was about the same as five or six 56kbit links. In a trivial survey, we found 200 IBM mainframe customers that had simply moved to full T1, supported by non-IBM mainframe controllers.
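
The tariff arithmetic behind the "fat pipe" break-even point can be sketched (an illustrative back-of-the-envelope only; the five-to-six-link tariff break-even is from the text, and the link rates are the standard 56kbit DS0 and 1.544mbit T1 figures):

```python
# Standard digital-carrier line rates
LINK_56K = 56_000        # bits/sec, single 56kbit leased line
T1 = 1_544_000           # bits/sec, full T1 ("1.5mbit/sec")

def fat_pipe_bps(n_links):
    """Aggregate rate of N parallel 56kbit links treated as one logical link."""
    return n_links * LINK_56K

# If a T1 tariff ran about the same as five or six 56kbit links, then at
# the six-link point a customer could switch to a full T1 for roughly the
# same money and get ~4.6x the aggregate bandwidth -- which is why nobody
# showed up in the survey with more than six links in a fat pipe.
six_links = fat_pipe_bps(6)      # 336,000 bits/sec
gain = T1 / six_links            # ~4.6x

print(f"six-link fat pipe: {six_links} bps; full T1 gain: {gain:.1f}x")
```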

trivia1: the first webserver in the US was Stanford SLAC (CERN sister installation) on their VM370 system
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994

trivia2: OASC (28mar1986 preliminary announce) funding for software went to NCSA
https://beta.nsf.gov/news/mosaic-launches-internet-revolution
developed MOSAIC (browser). Some of the people left NCSA and did a MOSAIC startup (the name was changed to NETSCAPE when NCSA complained about the use of "MOSAIC").

Last product we did at IBM was HA/CMP ... it started out as HA/6000 for the NYTimes to migrate their newspaper system (ATEX) off VAXCluster to RS/6000; I rename it HA/CMP (High Availability Cluster Multi-Processing) when I start doing technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (Ingres, Informix, Sybase, and Oracle, who had VAXCluster support in a common source base with their other products; I do an API with VAXCluster semantics to simplify the port). Old archived post with reference to the Jan92 cluster scale-up meeting with the Oracle CEO (16-processor by mid92, 128-processor by ye92)
https://www.garlic.com/~lynn/95.html#13

Within a few weeks of the Ellison meeting, cluster scale-up is transferred (announced as IBM supercomputer) and we were told we couldn't work on anything with more than four processors. We leave IBM a few months later. Later we are brought in as consultants to a small client/server startup. Two of the former Oracle people (we had worked with on cluster scale-up) were there, responsible for something called "commerce server", and wanted to do payment transactions on the server. The startup had also invented this technology they called SSL that they wanted to use; the result is now frequently called "electronic commerce". I had absolute authority for everything between the webservers and the financial industry payment networks. I then created a "Why Internet Isn't Business Critical Dataprocessing" talk based on the work I had to do for electronic commerce. I was also doing some work with Jon Postel (Internet RFC editor)
https://en.wikipedia.org/wiki/Jon_Postel
and he sponsored my talk at ISI & USC.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
payment gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

a couple other internet related posts
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/
https://www.linkedin.com/pulse/inventing-internet-lynn-wheeler/

ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Change

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Change
Date: 16 Jan, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#20 IBM Change
https://www.garlic.com/~lynn/2023.html#21 IBM Change
https://www.garlic.com/~lynn/2023.html#30 IBM Change
https://www.garlic.com/~lynn/2023.html#31 IBM Change

another communication group death grip story, about severely performance-kneecapping microchannel cards as part of their fierce battle fighting off distributed computing and client/server; AWD (workstation division) had the PC/RT with AT-bus and had done their own 4mbit token-ring card. Then for the RS/6000 with microchannel, they were told they could only use PS2 microchannel cards. One example was that the PS2 16mbit token-ring card had lower throughput than the PC/RT 4mbit token-ring card (i.e. a PC/RT 4mbit token-ring server would have higher throughput than an RS/6000 16mbit token-ring server; the 16mbit t/r microchannel card's design point was 300+ stations doing terminal emulation on a single network; aka NOT client/server). Joke was that an RS/6000 limited to the severely performance-kneecapped PS2 microchannel cards wouldn't have better throughput than a PS2/486 (token-ring, display, disks, etc).

communication group trying to preserve their dumb terminal paradigm/emulation
https://www.garlic.com/~lynn/subnetwork.html#terminal
AWD, 801/risc, iliad, romp, rios, pc/rt, rs6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Punch Cards

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Punch Cards
Date: 16 Jan, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#22 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#24 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#25 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#26 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#27 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#28 IBM Punch Cards

Late 70s, I had been brought into the Safeway Oakland datacenter running a large loosely-coupled complex serving all the US stores (before the LBO and selling off blocks of stores) ... which was running into horrible throughput problems (apparently all the company and POK performance experts had already been brought through). I was brought into a large classroom with piles of SVS system activity reports covering the tables. After about 30mins I started to recognize that a specific shared disk was maxing out around 7 I/Os per second (aggregate across all activity reports, during peak worst performance) and asked what it was. It was the shared PDS store controller application library ... with a 3cyl PDS directory. Turns out the PDS directory search was taking an avg of 1.5 3330 cyls: one 19-track multi-track search at 60revs/sec, or .317sec, plus a 2nd multi-track search of 9.5 tracks, or .158sec, for an avg of .475sec; followed by seek/search/read of the actual (store controller) PDS member application. The 3330 was capable of loading only two store controller applications per second for all stores in the country. Solution was to split the store controller application PDS dataset into multiple datasets across multiple disks, creating a unique set of non-shared, dedicated store controller dataset DASD for each system.
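
The directory-search arithmetic above can be sketched (a back-of-the-envelope illustration; the 3330 figures, 19 tracks per cylinder and 60 revolutions/sec, are the ones in the text):

```python
# 3330 DASD: 3600 RPM => 60 revolutions/sec; 19 tracks per cylinder.
# A CKD multi-track search compares one track per revolution, so
# searching N tracks keeps channel and device busy for N/60 seconds.
REVS_PER_SEC = 60
TRACKS_PER_CYL = 19

def multitrack_search_secs(tracks):
    return tracks / REVS_PER_SEC

# Average search over a 3cyl PDS directory: one full cylinder plus
# half of a second cylinder (avg 1.5 cylinders searched per lookup).
full_cyl = multitrack_search_secs(TRACKS_PER_CYL)       # ~0.317 sec
half_cyl = multitrack_search_secs(TRACKS_PER_CYL / 2)   # ~0.158 sec
avg_search = full_cyl + half_cyl                        # 0.475 sec

# Ignoring the comparatively small member seek/read, the disk tops
# out around 1/0.475 ~= 2 application loads per second.
loads_per_sec = 1 / avg_search

print(f"avg directory search: {avg_search:.3f} sec")
print(f"max member loads/sec: {loads_per_sec:.1f}")
```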

I've claimed that CKD was a 60s 360 tradeoff between limited real storage (for caching disk information) and abundant I/O resources (vtoc, pds directories, etc., doing multi-track searches) ... a trade-off that started to invert by the mid-70s. I've frequently told the story before: in the 60s, as an undergraduate, the univ hired me fulltime responsible for OS/360. I learned to carefully redo the sysgen for dataset placement and PDS member order to optimize avg arm seek and PDS directory multi-track search ... so I was already familiar with the problems.

Trivia: earlier I had transferred to IBM San Jose Research and was allowed to wander around IBM and customer datacenters. Disk engineering (bldg14) and disk product test (bldg15) across the street were running stand-alone, 7x24, prescheduled testing. They mentioned that they had tried running with MVS, but found it had a 15min mean-time-between-failure (MTBF) in that environment, requiring manual re-ipl. I offered to rewrite the I/O supervisor making it bullet proof and never fail ... allowing them to do any amount of on-demand, concurrent testing (greatly improving productivity). Downside was they got in the habit of trying to blame my systems for problems and I had to spend increasing amounts of time playing disk hardware engineer. I made the mistake of writing a research report about the work and happened to mention the MVS 15min MTBF ... bringing down the wrath of the MVS organization on my head (offline I was told that they tried to have me separated from the IBM company; when that didn't work, they tried to make things unpleasant in other ways).

other trivia: I was increasingly pontificating on mainframe disk performance issues; in the early 80s, I wrote a tome that between os/360 announce and the early 80s, disk relative system performance had declined by an order of magnitude (disks got 3-5 times faster while systems got 40-50 times faster). Some GPD disk division executive took exception to what I wrote and assigned the division performance organization to refute my claims. After a couple of weeks they basically came back and said I had slightly understated the problem. They then respun the analysis as optimizing DASD configuration for improved system throughput (SHARE 63, B874, 16Aug1984).

Also late 70s, I was told by the MVS DASD group that even if I provided fully functional and tested FBA support (aka 3370 FBA), I still needed a $26M incremental business case ($200M-$300M in additional sales) to cover MVS training and documentation changes ... and by-the-way, IBM was selling every disk it was making, so any change from CKD to FBA would just be the same amount of revenue (*AND* as part of the justification, I wasn't able to use the life-time simplification and savings from changing from CKD to FBA architecture)

Note *ALL* disk hardware was already beginning to migrate to FBA ... even 3380 CKD (this can be seen in the records/track formulas, where record size has to be rounded up to the 3380 "cell size"). No real CKD disks have been made for decades; all are simulated on industry standard fixed-block disks.

playing disk engineer archived posts
https://www.garlic.com/~lynn/subtopic.html#disk
FBA, CKD, multi-track search posts
https://www.garlic.com/~lynn/submain.html#dasd

other recent related posts
https://www.linkedin.com/pulse/mainframe-channel-io-lynn-wheeler/
https://www.linkedin.com/pulse/mainframe-channel-redrive-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-2-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-4-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-5-lynn-wheeler/

some recent posts mentioning the PDS directory multi-track search problem at safeway
https://www.garlic.com/~lynn/2022f.html#85 IBM CKD DASD
https://www.garlic.com/~lynn/2022e.html#48 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2022c.html#70 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022.html#72 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2021k.html#108 IBM Disks
https://www.garlic.com/~lynn/2021j.html#105 IBM CKD DASD and multi-track search
https://www.garlic.com/~lynn/2019b.html#15 Tandem Memo

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Punch Cards

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Punch Cards
Date: 16 Jan, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#22 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#24 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#25 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#26 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#27 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#28 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#33 IBM Punch Cards

With the decision to make all 370s virtual memory, some of the science center people moved to the 3rd flr (taking over the IBM Boston Programming Center) to form the VM370 development group. When they outgrew the 3rd flr, they moved out to the empty (former IBM SBC) bldg in Burlington Mall. Then in the wake of the Future System implosion disaster there was a mad rush to get stuff back into the 370 product pipelines and the quick&dirty 3033&3081 efforts were kicked off in parallel. The head of POK also managed to convince corporate to kill the VM370 product and move all the people to POK (or otherwise, the claim went, MVS/XA wouldn't ship on time). They were planning on not telling the VM370 people until the very last minute, to minimize the number that might escape. The move managed to leak early and some number escaped into the Boston area (this was when DEC was starting VMS development, and the joke was that the head of POK was a major contributor to DEC VMS). Then there was a witchhunt for the source of the leak; fortunately for me, nobody gave up the source. Note: Endicott managed to save the VM370 product mission, but had to reconstitute a development group from scratch (with some amount of customer complaints about code quality during this period).

Trivia: a decade+ ago, I was asked to track down the original decision to make all 370s virtual memory. Basically MVT storage management was so bad that regions had to be specified four times larger than used. As a result, a typical 1mbyte 370/165 could only run four concurrent regions ... insufficient throughput to keep the system busy and justified. MVT moving to SVS, a 16mbyte virtual address space (very similar to running MVT in a CP67 16mbyte virtual machine), would allow increasing the number of regions by a factor of four with little or no paging. Pieces of the email exchange (with a staff member of the executive making the virtual memory decision) are archived here:
https://www.garlic.com/~lynn/2011d.html#73
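
The region arithmetic can be sketched (illustrative numbers only; the text gives the 4x over-specification and a typical 1mbyte 370/165 running four concurrent regions):

```python
# MVT: regions are carved directly out of real storage, and because MVT
# storage management was so bad, each region had to be specified ~4x
# larger than the storage it actually used.
real_storage = 1024 * 1024       # typical 1mbyte 370/165
overspec_factor = 4

# Four concurrent regions fill real storage, so each region
# specification is 256kbytes, of which only ~64kbytes gets touched.
mvt_regions = 4
region_spec = real_storage // mvt_regions        # 262,144 bytes
region_used = region_spec // overspec_factor     # 65,536 bytes

# SVS: run everything in a single 16mbyte virtual address space; real
# storage now only needs to back the pages actually touched, so roughly
# 4x as many regions fit with little or no paging.
svs_regions = real_storage // region_used

print(f"MVT concurrent regions: {mvt_regions}")
print(f"SVS concurrent regions: ~{svs_regions}")
```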

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
future system failure posts
https://www.garlic.com/~lynn/submain.html#futuresys

recent posts about making all 370s virtual memory
https://www.garlic.com/~lynn/2023.html#4 Mainrame Channel Redrive
https://www.garlic.com/~lynn/2022h.html#115 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022h.html#93 IBM 360
https://www.garlic.com/~lynn/2022h.html#36 360/85
https://www.garlic.com/~lynn/2022h.html#32 do some Americans write their 1's in this way ?
https://www.garlic.com/~lynn/2022h.html#27 370 virtual memory
https://www.garlic.com/~lynn/2022h.html#21 370 virtual memory
https://www.garlic.com/~lynn/2022h.html#17 Arguments for a Sane Instruction Set Architecture--5 years later
https://www.garlic.com/~lynn/2022h.html#4 IBM CAD
https://www.garlic.com/~lynn/2022g.html#2 VM/370
https://www.garlic.com/~lynn/2022f.html#122 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022f.html#113 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022f.html#110 IBM Downfall
https://www.garlic.com/~lynn/2022f.html#83 COBOL and tricks
https://www.garlic.com/~lynn/2022f.html#44 z/VM 50th
https://www.garlic.com/~lynn/2022f.html#41 MVS
https://www.garlic.com/~lynn/2022f.html#7 Vintage Computing

--
virtualization experience starting Jan1968, online at home since Mar1970

Revealed: Exxon Made "Breathtakingly" Accurate Climate Predictions in 1970's and 80's

From: Lynn Wheeler <lynn@garlic.com>
Subject: Revealed: Exxon Made "Breathtakingly" Accurate Climate Predictions in 1970's and 80's
Date: 16 Jan, 2023
Blog: Facebook
Revealed: Exxon Made "Breathtakingly" Accurate Climate Predictions in 1970's and 80's. Oil company drove some of the leading science of the era only to publicly dismiss global warming.
https://www.motherjones.com/environment/2023/01/revealed-exxon-made-breathtakingly-accurate-climate-predictions-in-1970s-and-80s/
The oil giant Exxon privately "predicted global warming correctly and skilfully" only to then spend decades publicly rubbishing such science in order to protect its core business, new research has found.

... snip ...

merchants of doubt posts
https://www.garlic.com/~lynn/submisc.html#merchants.of.doubt

some recent posts mentioning Big Oil and climate
https://www.garlic.com/~lynn/2022g.html#89 Five fundamental reasons for high oil volatility
https://www.garlic.com/~lynn/2022g.html#21 'Wildfire of disinformation': how Chevron exploits a news desert
https://www.garlic.com/~lynn/2022f.html#16 The audacious PR plot that seeded doubt about climate change
https://www.garlic.com/~lynn/2022c.html#117 Documentary Explores How Big Oil Stalled Climate Action for Decades
https://www.garlic.com/~lynn/2021i.html#28 Big oil's 'wokewashing' is the new climate science denialism
https://www.garlic.com/~lynn/2021h.html#2 The Disturbing Rise of the Corporate Mercenaries
https://www.garlic.com/~lynn/2021g.html#72 It's Time to Call Out Big Oil for What It Really Is
https://www.garlic.com/~lynn/2021g.html#16 Big oil and gas kept a dirty secret for decades
https://www.garlic.com/~lynn/2021g.html#13 NYT Ignores Two-Year House Arrest of Lawyer Who Took on Big Oil
https://www.garlic.com/~lynn/2021g.html#3 Big oil and gas kept a dirty secret for decades
https://www.garlic.com/~lynn/2021f.html#25 POGO Testimony on Holding the Oil and Gas Industry Accountable
https://www.garlic.com/~lynn/2021f.html#15 POGO Testimony on Holding the Oil and Gas Industry Accountable
https://www.garlic.com/~lynn/2021e.html#89 How climate change skepticism held a government captive
https://www.garlic.com/~lynn/2021e.html#77 How climate change skepticism held a government captive
https://www.garlic.com/~lynn/2021e.html#59 How climate change skepticism held a government captive
https://www.garlic.com/~lynn/2019e.html#123 'Deep, Dark Conspiracy Theories' Hound Some Civil Servants In Trump Era
https://www.garlic.com/~lynn/2018f.html#30 Scientists Just Laid Out Paths to Solve Climate Change. We Aren't on Track to Do Any of Them
https://www.garlic.com/~lynn/2018d.html#112 NASA chief says he changed mind about climate change because he 'read a lot'
https://www.garlic.com/~lynn/2018b.html#114 Chevron's lawyer, speaking for major oil companies, says climate change is real and it's your fault
https://www.garlic.com/~lynn/2017i.html#13 Merchants of Doubt
https://www.garlic.com/~lynn/2017b.html#5 Trump to sign cyber security order

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM changes between 1968 and 1989

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM changes between 1968 and 1989
Date: 16 Jan, 2023
Blog: Facebook
Question about IBM changes between 1968 and 1989, my often referenced account ... starting with Learson trying to prevent the bureaucrats, careerists, and MBAs destroying Watson's legacy.
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

as I periodically pointed out, in 89/90 the commandant of the Marine Corps leveraged John Boyd for a make-over of the Corps, at a time when IBM was desperately in need of a make-over. Shortly after, IBM has one of the largest losses in the history of US companies and is being reorged into the 13 "baby blues" in preparation for breaking up the company.

Especially during the Future System period in the 70s, IBM sales became infamous for FUD (fear, uncertainty, doubt) marketing. A major FS motivation was to be so complex that the clone controller ("PCM") makers wouldn't be able to keep up. FS was completely different from 370 and was going to completely replace 370. Internal politics during the FS period was shutting down the 370 activities, and the lack of new 370 products during the period (along with extensive FUD marketing) is credited with giving the clone 370 makers (Amdahl, Hitachi/NAS, etc) their market foothold. However, FS complexity was so great that even IBM wasn't able to do it. After FS implodes, there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033 and 3081 efforts in parallel. From Ferguson & Morris, "Computer Wars: The Post-IBM World", Time Books, 1993
http://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394
"and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYNCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrong headedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive."

... snip ...

from IBM Jargon
https://comlay.net/ibmjarg.pdf
FS n. Future System. A synonym for dreams that didn't come true. "That project will be another FS". Note that FS is also the abbreviation for "functionally stabilized", and, in Hebrew, means "zero", or "nothing". Also known as "False Start", etc.

FUD (fud) n. Fear, Uncertainty and Doubt. Attributed to Dr. Gene Amdahl after he left IBM to start his own company, Amdahl, who alleged that "F U D" is the fear, uncertainty and doubt that IBM sales people instill in the minds of potential customers who may be considering our [Amdahl] products.


... snip ...

Amdahl leaves IBM shortly after ACS/360 is shut down (executives were afraid that it would advance the state-of-the-art too fast and IBM would lose control of the market). Note: towards the end of the page, it lists ACS/360 features that show up more than 20yrs later with ES/9000.
https://people.cs.clemson.edu/~mark/acs_end.html

more FS detail
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html
other refs about long term effects
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
https://www.linkedin.com/pulse/ibm-downfall-lynn-wheeler/
https://www.linkedin.com/pulse/ibm-breakup-lynn-wheeler/
https://www.linkedin.com/pulse/ibm-controlling-market-lynn-wheeler/

trivia: in aug1976, online commercial TYMSHARE
https://en.wikipedia.org/wiki/Tymshare
started offering their CMS-based online computer conferencing system (precursor to modern social media) "free" to the IBM mainframe user group "SHARE"
https://en.wikipedia.org/wiki/SHARE_(computing)
archives here, can search for user comments about IBM FUD marketing:
http://vm.marist.edu/~vmshare

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
360 clone controller/PCM (plug compatible maker) posts
https://www.garlic.com/~lynn/submain.html#360pcm
commercial, online, virtual machine service bureaus
https://www.garlic.com/~lynn/submain.html#online
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Adventure Game

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Adventure Game
Date: 17 Jan, 2023
Blog: Facebook
trivia: in aug1976, online commercial TYMSHARE
https://en.wikipedia.org/wiki/Tymshare
started offering their CMS-based online computer conferencing system (precursor to modern social media) "free" to the IBM mainframe user group "SHARE"
https://en.wikipedia.org/wiki/SHARE_(computing)
archives here, can search for user comments about IBM FUD marketing:
http://vm.marist.edu/~vmshare

After transferring to San Jose, I would wander around IBM and non-IBM locations in silicon valley, including TYMSHARE. In Aug1976, TYMSHARE had made their CMS-based online computer conferencing system free to SHARE as VMSHARE ... archives here
http://vm.marist.edu/~vmshare

I cut a deal with TYMSHARE to get a monthly tape backup of all VMSHARE files for putting up on the IBM internal network and systems (including the online world-wide sales&marketing support HONE systems) ... the biggest problem I had was with the lawyers, who were concerned that internal employees could be contaminated by direct exposure to customer information.

On one visit to TYMSHARE they demo'ed ADVENTURE ... having found it on Stanford's SAIL PDP10 and ported it to CMS. I made the executable available within IBM (and supplied the source to those that got all points). Within a short time, versions with lots more points and PL/I language implementations started appearing.

The TYMSHARE story was that when an executive learned about game playing on TYMSHARE computers, he directed that the games be removed, since TYMSHARE was for business people and business apps. He changed his mind when he was told that game playing had grown to 1/3rd of TYMSHARE revenue.

commercial, online, virtual machine service bureaus
https://www.garlic.com/~lynn/submain.html#online
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

some past posts mentioning TYMSHARE, VMSHARE, and ADVENTURE
https://www.garlic.com/~lynn/2022e.html#1 IBM Games
https://www.garlic.com/~lynn/2022c.html#28 IBM Cambridge Science Center
https://www.garlic.com/~lynn/2022b.html#107 15 Examples of How Different Life Was Before The Internet
https://www.garlic.com/~lynn/2022b.html#28 Early Online
https://www.garlic.com/~lynn/2022.html#123 SHARE LSRAD Report
https://www.garlic.com/~lynn/2022.html#57 Computer Security
https://www.garlic.com/~lynn/2021k.html#102 IBM CSO
https://www.garlic.com/~lynn/2021h.html#68 TYMSHARE, VMSHARE, and Adventure
https://www.garlic.com/~lynn/2021e.html#8 Online Computer Conferencing
https://www.garlic.com/~lynn/2021b.html#84 1977: Zork
https://www.garlic.com/~lynn/2021.html#85 IBM Auditors and Games
https://www.garlic.com/~lynn/2018f.html#111 Online Timsharing
https://www.garlic.com/~lynn/2017j.html#26 Tech: we didn't mean for it to turn out like this
https://www.garlic.com/~lynn/2017h.html#11 The original Adventure / Adventureland game?
https://www.garlic.com/~lynn/2017f.html#67 Explore the groundbreaking Colossal Cave Adventure, 41 years on
https://www.garlic.com/~lynn/2017d.html#100 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2016e.html#103 August 12, 1981, IBM Introduces Personal Computer
https://www.garlic.com/~lynn/2013b.html#77 Spacewar! on S/360
https://www.garlic.com/~lynn/2012n.html#68 Should you support or abandon the 3270 as a User Interface?
https://www.garlic.com/~lynn/2012d.html#38 Invention of Email
https://www.garlic.com/~lynn/2011g.html#49 My first mainframe experience
https://www.garlic.com/~lynn/2011f.html#75 Wylbur, Orvyl, Milton, CRBE/CRJE were all used (and sometimes liked) in the past
https://www.garlic.com/~lynn/2011b.html#31 Colossal Cave Adventure in PL/I
https://www.garlic.com/~lynn/2010q.html#70 VMSHARE Archives
https://www.garlic.com/~lynn/2010d.html#84 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2010d.html#57 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2009q.html#64 spool file tag data
https://www.garlic.com/~lynn/2008s.html#12 New machine code
https://www.garlic.com/~lynn/2006y.html#18 The History of Computer Role-Playing Games
https://www.garlic.com/~lynn/2006n.html#3 Not Your Dad's Mainframe: Little Iron
https://www.garlic.com/~lynn/2005u.html#25 Fast action games on System/360+?
https://www.garlic.com/~lynn/2005k.html#18 Question about Dungeon game on the PDP
https://www.garlic.com/~lynn/2004k.html#38 Adventure

--
virtualization experience starting Jan1968, online at home since Mar1970

Disk optimization

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Disk optimization
Date: 17 Jan, 2023
Blog: Facebook
I took a two semester hr intro to fortran/computers; at the end of the semester I was hired to rewrite 1401 MPIO for the 360/30. The univ. had a 709/1401 and was sold a 360/67 for TSS/360 as replacement ... getting a 360/30 temporarily replacing the 1401 pending arrival of the 360/67. The univ. shut down the datacenter for the weekend ... and I would have the place dedicated (although 48hrs w/o sleep made monday classes hard). Within a year of the intro class (after the 360/67 arrived), I was hired fulltime, responsible for OS/360 (tss/360 never came to production fruition, so it ran as a 360/65 with os/360). Some more details
https://www.linkedin.com/pulse/zvm-50th-lynn-wheeler/

The science center installed virtual machine CP67 (precursor to vm370) at the univ (3rd install after cambridge and mit lincoln labs) and I mostly played with it during my weekend dedicated datacenter windows. Original CP67 I/O was FIFO and paging was a single 4k transfer at a time. I rewrote 2314 I/O for ordered arm seek and rewrote page I/O to chain multiple page requests in a single I/O in optimized rotational order (for both 2314 disk and 2301 fixed head drum), using seek track for records on different tracks at the same arm position (original CP67 2301 was 70 page I/Os per sec, the rewrite could get nearly 270 page I/Os per sec, aka 30*9). This was picked up and included in CP67 3.2 and also used for vm370 (although CP67->VM370 dropped and/or greatly simplified a lot of my other undergraduate work ... like dynamic adaptive resource management, dynamic working set calculations, page replacement algorithms, etc). Trivia: 2301 had room for 4.5 4kbyte blocks/track ... and the TSS/360 & CP67 2301 format formatted nine 4k records per pair of 2301 tracks (with the 5th record spanning the end of the 1st track and the start of the 2nd track).
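
The two queueing ideas above can be sketched roughly as follows. This is a hedged illustration with invented request records and numbers, not the actual CP67 code: (1) order pending disk requests by arm position instead of FIFO, and (2) chain same-cylinder page requests in rotational order so several 4k transfers can be serviced per revolution.

```python
def order_arm_seeks(requests, current_cyl):
    """Order pending requests by distance from the current arm position
    (a simple shortest-seek-first policy standing in for ordered seek)."""
    return sorted(requests, key=lambda r: abs(r["cyl"] - current_cyl))

def chain_rotational(requests, sectors_per_track, current_sector):
    """Order same-cylinder page requests by angular position so they can
    be chained into a single channel program and serviced in one or two
    revolutions, instead of one revolution per 4k page."""
    return sorted(
        requests,
        key=lambda r: (r["sector"] - current_sector) % sectors_per_track,
    )

# hypothetical queue of page requests: cylinder + rotational sector
queue = [
    {"cyl": 40, "sector": 2},
    {"cyl": 10, "sector": 5},
    {"cyl": 10, "sector": 1},
]
by_seek = order_arm_seeks(queue, current_cyl=12)
same_cyl = [r for r in by_seek if r["cyl"] == by_seek[0]["cyl"]]
# the two cylinder-10 requests come out in rotational order: sector 1, then 5
print(chain_rotational(same_cyl, sectors_per_track=9, current_sector=0))
```

The point is that FIFO pays a full seek and a full rotation per 4k page; sorting by arm position and then rotational angle amortizes both across the whole queue.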

For VM370 and 3330 paging ... there was room for three 4k pages per track, but for a 155/158 to do track-switch CCW processing between one 4k page and the next rotating 4k page on a different track required formatting a 110-byte "dummy" block between 4k records ... which then exceeded 3330 track length. I wrote a program to format different-sized dummy records and test being able to do track-switch CCW processing between the end of one 4k block and the start of the next ... and had customers run it on every combination of IBM & non-IBM disk controllers as well as IBM & non-IBM processors (some, like the 158, had integrated channel microcode that shared execution with the 158 engine execution of 370 microcode). 158 channel processing was the worst/slowest of all systems and disk controllers tested. As an aside, the quick&dirty 3033 effort (remap 168 logic to 20% faster chips) used the 158 integrated channel microcode and 158 engine (for the 303x channel director), with similarly slow channel program processing as the real 158.
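
A back-of-the-envelope version of the track-capacity squeeze above: every record on a CKD track carries fixed gap/count overhead in addition to its data, so adding dummy blocks between the three 4k pages can push the total past the track limit. All numbers here are illustrative assumptions (real CKD capacity formulas vary by device), not actual 3330 geometry tables.

```python
TRACK_BYTES = 13030          # nominal 3330 track capacity (approximate)
PER_RECORD_OVERHEAD = 135    # assumed per-record gap/count overhead

def track_fits(record_sizes):
    """True if the records (data plus assumed per-record overhead) fit
    within the track capacity."""
    used = sum(size + PER_RECORD_OVERHEAD for size in record_sizes)
    return used <= TRACK_BYTES

# three 4k pages alone fit on the track ...
print(track_fits([4096] * 3))                        # True
# ... but adding two 110-byte dummy blocks between them does not,
# which is the "exceeded 3330 track length" problem described above
print(track_fits([4096, 110, 4096, 110, 4096]))      # False
```

Finding the largest dummy-block size that still fits (and still gives the channel enough time for track-switch processing) is exactly the kind of per-controller, per-processor testing the paragraph describes.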

I had also written a program that could do a full-track read and then rewrite the three 4k pages with specified-size dummy blocks on live production paging devices.

When IBM stopped making 2305 fixed-head disks for 370s ... IBM contracted with Intel for "1655" electronic paging disks for internal datacenters ... which simulated a 1.5mbyte/sec 2305. They could also be configured to run at native (effectively FBA) 3mbyte/sec.

re: 3350FH; the 2305 had "multiple-exposure" subchannel addresses ... so channel programs could be queued up on different subchannel addresses and the controller would make a real-time choice of which channel program to execute. The 3350FH (fixed-head feature, like a limited 2305 fixed-head disk) only had a single subchannel address ... so it wasn't possible to have a fixed-head transfer channel program active while the 3350 disk arm was in motion for a seek channel program. I had a project to add another exposure to 3350FH ... so I could overlap fixed-head transfer while the disk arm was in motion. However, there was a group in POK working on VULCAN ... their own electronic disk for paging ... and they felt my 3350FH multiple exposure might impact their market (getting it canceled). They never got to ship VULCAN ... it was eventually canceled with the claim that IBM was selling all the electronic memory it was making as higher-markup processor memory ... but by that time it was too late to revive my 3350FH multiple exposure.
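
The payoff of a second exposure can be sketched with a toy latency model. This is a hedged illustration with invented millisecond figures (not actual 2305/3350 timings): with one subchannel address the fixed-head transfer must wait for an in-flight arm-seek channel program to finish; with two addresses the two operations overlap.

```python
def elapsed(seek_ms, transfer_ms, exposures):
    """Toy model of total elapsed time for one seek channel program plus
    one fixed-head transfer channel program."""
    if exposures == 1:
        # single subchannel address: operations are serialized
        return seek_ms + transfer_ms
    # separate subchannel addresses: the fixed-head transfer proceeds
    # while the arm is in motion, so only the longer operation is visible
    return max(seek_ms, transfer_ms)

print(elapsed(seek_ms=25, transfer_ms=5, exposures=1))  # serialized: 30
print(elapsed(seek_ms=25, transfer_ms=5, exposures=2))  # overlapped: 25
```

Under heavy paging load the serialized case means the fixed heads sit idle for the whole seek, which is the throughput loss the extra exposure was meant to recover.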

some more on I/O
https://www.linkedin.com/pulse/mainframe-channel-io-lynn-wheeler/
https://www.linkedin.com/pulse/mainframe-channel-redrive-lynn-wheeler/

getting to play disk engineer posts in bldgs14&15
https://www.garlic.com/~lynn/subtopic.html#disk

some archived posts mentioning 3350fh & vulcan
https://www.garlic.com/~lynn/2021k.html#97 IBM Disks
https://www.garlic.com/~lynn/2021j.html#65 IBM DASD
https://www.garlic.com/~lynn/2021f.html#75 Mainframe disks
https://www.garlic.com/~lynn/2017e.html#36 National Telephone Day

a few archived posts referencing the linkedin posts
https://www.garlic.com/~lynn/2023.html#33 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#4 Mainrame Channel Redrive
https://www.garlic.com/~lynn/2023.html#0 AUSMINIUM FOUND IN HEAVY RED-TAPE DISCOVERY
https://www.garlic.com/~lynn/2022h.html#120 IBM Controlling the Market
https://www.garlic.com/~lynn/2022h.html#114 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022h.html#108 IBM 360
https://www.garlic.com/~lynn/2022h.html#58 Model Mainframe
https://www.garlic.com/~lynn/2022h.html#36 360/85
https://www.garlic.com/~lynn/2022h.html#16 Inventing the Internet
https://www.garlic.com/~lynn/2022h.html#11 Computer History IBM 305 RAMAC and 650 RAMAC, 1956 (350 Disk Storage)
https://www.garlic.com/~lynn/2022g.html#91 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022g.html#88 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#84 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022g.html#75 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022g.html#65 IBM DPD
https://www.garlic.com/~lynn/2022f.html#95 VM I/O
https://www.garlic.com/~lynn/2022f.html#85 IBM CKD DASD
https://www.garlic.com/~lynn/2022f.html#82 Why the Soviet computer failed
https://www.garlic.com/~lynn/2022f.html#79 Why the Soviet computer failed
https://www.garlic.com/~lynn/2022f.html#61 200TB SSDs could come soon thanks to Micron's new chip
https://www.garlic.com/~lynn/2022f.html#53 z/VM 50th - part 4
https://www.garlic.com/~lynn/2022f.html#50 z/VM 50th - part 3
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022f.html#47 z/VM 50th
https://www.garlic.com/~lynn/2022f.html#28 IBM Power: The Servers that Apple Should Have Created

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM AIX

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM AIX
Date: 17 Jan, 2023
Blog: Facebook
Unix is dead. Long live Unix! Don't expect to see any more big AIX news. This means the last Unix left is ... Linux
https://www.theregister.com/2023/01/17/unix_is_dead/

Note: Palo Alto was working with UCB on a BSD port to 370 and with UCLA on a Locus port that they initially did to series/1. The BSD port was then redirected to the PC/RT as an alternative to AIX.
https://en.wikipedia.org/wiki/Berkeley_Software_Distribution

Then ports of UCLA Locus were done to both 370 (as AIX/370) and PS2 (as AIX/PS2) ... implementations totally unrelated to Austin AIX.
https://en.wikipedia.org/wiki/Locus_Computing_Corporation
https://dl.acm.org/doi/10.1145/773379.806615

... original UCLA LOCUS implementation
https://en.wikipedia.org/wiki/Locus_Computing_Corporation#AIX_for_IBM_PS/2_and_System/370
Locus was commissioned by IBM to produce a version of the AIX UNIX based operating system for the PS/2 and System/370 ranges. The single-system image capabilities of LOCUS were incorporated under the name of AIX TCF (transparent computing facility).

... snip ...

ROMP was originally going to be the follow-on to the displaywriter, and when that got canceled, they decided to retarget it to the unix workstation market. They contracted with the company that had done the AT&T unix port to the IBM/PC for PC/IX, to do one for ROMP, which becomes AIX for the PC/RT.

801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc archived posts
https://www.garlic.com/~lynn/subtopic.html#801

a few archived unix posts mentioning locus, bsd, aix/ps2, aix/370
https://www.garlic.com/~lynn/2017d.html#82 Mainframe operating systems?
https://www.garlic.com/~lynn/2008c.html#53 Migration from Mainframe to othre platforms - the othe bell?
https://www.garlic.com/~lynn/2007n.html#87 Why is not AIX ported to z/Series?
https://www.garlic.com/~lynn/2007l.html#7 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007f.html#9 IBM S/360 series operating systems history
https://www.garlic.com/~lynn/2006c.html#11 Mainframe Jobs Going Away
https://www.garlic.com/~lynn/2006b.html#8 Free to good home: IBM RT UNIX
https://www.garlic.com/~lynn/2005u.html#61 DMV systems?
https://www.garlic.com/~lynn/2005s.html#34 Power5 and Cell, new issue of IBM Journal of R&D
https://www.garlic.com/~lynn/2005j.html#26 IBM Plugs Big Iron to the College Crowd
https://www.garlic.com/~lynn/2005b.html#22 The Mac is like a modern day Betamax
https://www.garlic.com/~lynn/2004q.html#39 CAS and LL/SC
https://www.garlic.com/~lynn/2004q.html#38 CAS and LL/SC
https://www.garlic.com/~lynn/2004n.html#30 First single chip 32-bit microprocessor
https://www.garlic.com/~lynn/2004h.html#41 Interesting read about upcoming K9 processors
https://www.garlic.com/~lynn/2004d.html#72 ibm mainframe or unix
https://www.garlic.com/~lynn/2003h.html#45 Question about Unix "heritage"
https://www.garlic.com/~lynn/2003h.html#35 UNIX on LINUX on VM/ESA or z/VM
https://www.garlic.com/~lynn/2003d.html#54 Filesystems
https://www.garlic.com/~lynn/2003d.html#8 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2001f.html#20 VM-CMS emulator
https://www.garlic.com/~lynn/2000e.html#27 OCF, PC/SC and GOP
https://www.garlic.com/~lynn/99.html#64 Old naked woman ASCII art
https://www.garlic.com/~lynn/99.html#2 IBM S/360

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM AIX

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM AIX
Date: 18 Jan, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#39 IBM AIX
https://www.garlic.com/~lynn/2021j.html#29 IBM AIX
https://www.garlic.com/~lynn/2021d.html#83 IBM AIX

IBM provided funding for Project Athena at MIT (x-windows, kerberos, etc) ... equally with DEC. IBM also provided funding to CMU for mach, andrew (toolkit & filesystem), camelot, etc ... to the tune of about 50% more than the combined IBM/DEC funding for MIT Athena. I believe IBM also provided substantial seed funding for the (camelot) Transarc spin-off ... and then paid again substantially when it bought the Transarc spin-off outright.

IBM PASC started working with Locus in the early '80s, doing ports to S/1 and a couple other boxes in Palo Alto ... including process migration and fractional file caching (in addition to distributed access and full file caching).

Early DCE meetings included key people from Locus, Transarc, and several IBM locations.

There were also misc. other unix work-alikes. The AT&T/IBM port of UNIX to TSS ... running as a TSS subsystem (I believe it saw a large deployment inside AT&T). Prior to that ... in the early '70s, at least one of the online commercial service bureau spin-offs of the IBM cambridge science center did a cluster version of CP67 (and then vm370) ... This included single-system image and process migration (between systems in loosely-coupled configuration in the same datacenter, as well as to systems in other datacenters) ... they had datacenters on both the east coast and the west coast ... providing 7x24 access to clients around the world. The IBM hardware required regularly (weekly) scheduled preventive maintenance (PM), so one motivation was both intra-datacenter and inter-datacenter process migration (there was no time-slot that could be scheduled to take down a box for PM where there weren't users from someplace in the world expecting uninterrupted service).

IBM cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
online commercial (virtual machine) service bureaus
https://www.garlic.com/~lynn/submain.html#timeshare

In the wake of the future system failure, there was a mad rush to get stuff back into the 370 product pipelines (internal politics during FS included shutting down 370 efforts), which included absorbing people doing advanced technology projects into the 370 product development breach.

I was involved in doing a 16-way tightly-coupled multiprocessor system, and we presented at an adtech conference along with the 801/risc group (before most of the adtech groups were absorbed into the development breach). I believe it was the last adtech conference until the one I held in spring of 1982. Old archived post of the conference and part of the agenda:
https://www.garlic.com/~lynn/96.html#4a

It included talks on the VM/370 UNIX implementation and the TSS/370 UNIX PRPQ for bell labs. Note also that a group from Stanford had approached Palo Alto about IBM turning out a workstation product they had developed; PASC had the YKT workstation group, the SJR 925 group, and the Boca ACORN (IBM/PC) group in to review the proposal; all three IBM groups claimed they were doing something better and IBM declined. The Stanford people then formed their own company, called SUN.

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

trivia: we had conned the 3033 processor engineers into working on the 16-way project in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips; this was well before 3033 announce). Everybody thought it was great until somebody told the head of POK that it might be decades before the POK favorite son operating system (MVS) had effective 16-way support. The head of POK then directed the 3033 processor engineers heads-down on 3033 only (and don't be distracted), and some of us were invited to never visit POK again. NOTE: IBM doesn't ship a 16-way until after the turn of the century with the z900.

SMP, tightly-coupled, multiprocessor and/or compare-and-swap posts
https://www.garlic.com/~lynn/subtopic.html#smp

some BSD, transarc, & mach posts
https://www.garlic.com/~lynn/2017d.html#41 What are mainframes
https://www.garlic.com/~lynn/2013m.html#65 'Free Unix!': The world-changing proclamation made30yearsagotoday
https://www.garlic.com/~lynn/2013b.html#43 Article for the boss: COBOL will outlive us all
https://www.garlic.com/~lynn/2012b.html#45 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2012.html#66 Has anyone successfully migrated off mainframes?
https://www.garlic.com/~lynn/2011d.html#33 Andrew developments in Rochester
https://www.garlic.com/~lynn/2010i.html#31 IBM Unix prehistory, someone smarter than Dave Cutler
https://www.garlic.com/~lynn/2008c.html#53 Migration from Mainframe to othre platforms - the othe bell?
https://www.garlic.com/~lynn/2008b.html#20 folklore indeed
https://www.garlic.com/~lynn/2006b.html#8 Free to good home: IBM RT UNIX
https://www.garlic.com/~lynn/2002o.html#32 I found the Olsen Quote

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3081 TCM

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3081 TCM
Date: 18 Jan, 2023
Blog: Facebook
FS was completely different from 370 and was going to replace all 370s (internal politics during FS was killing off 370 efforts, and the lack of new 370 products during the period is credited with giving the 370 clone makers their market foothold). When FS implodes, there was a mad rush to get stuff back into the 370 (hardware & software) product pipelines ... including kicking off the quick&dirty 3033 & 3081 efforts in parallel
http://www.jfsowa.com/computer/memo125.htm
The 370 emulator minus the FS microcode was eventually sold in 1980 as the IBM 3081. The ratio of the amount of circuitry in the 3081 to its performance was significantly worse than other IBM systems of the time; its price/performance ratio wasn't quite so bad because IBM had to cut the price to be competitive. The major competition at the time was from Amdahl Systems -- a company founded by Gene Amdahl, who left IBM shortly before the FS project began, when his plans for the Advanced Computer System (ACS) were killed. The Amdahl machine was indeed superior to the 3081 in price/performance and spectacularly superior in terms of performance compared to the amount of circuitry.

... snip ...

... claims that the 3081 required TCMs in order to pack the enormous amount of circuitry into reasonable volume ... and some on the end of ACS/360: executives shut it down because they were worried it would advance the state-of-the-art too fast and IBM would lose control of the market; it also has some refs to ACS/360 features showing up more than 20yrs later with ES/9000
https://people.cs.clemson.edu/~mark/acs_end.html

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

... a totally different TCM story. The 3880 disk control unit had a special hardware path for data transfer (supporting up to 3mbyte/sec) ... however it had a significantly slower microprocessor than its 3830 precursor disk control unit, greatly increasing channel busy for channel command processing. The 3090 group had sized the number of channels for balanced target system throughput ... assuming the 3880 was like a 3830 but capable of 3mbyte/sec transfer. When they found out how bad the 3880 channel busy really was, the 3090 group realized they had to significantly increase the number of channels (in order to reach target throughput). The increase in channels required an additional TCM ... they semi-facetiously said they would bill the 3880 group for the increase in 3090 manufacturing cost.

trivia: when I transferred to San Jose Research, I got to wander around both IBM and non-IBM datacenters around silicon valley, including disk engineering (bldg14) and disk product test (bldg15) across the street. They were running stand-alone, prescheduled, around-the-clock testing. They said that they had tried MVS, but it had a 15min mean-time-between-failure (MTBF, in that environment), requiring manual re-ipl. I offered to rewrite the I/O supervisor, making it bullet-proof and never fail, allowing any amount of on-demand, concurrent testing (greatly improving productivity). Downside was that whenever they had a problem, they would call me; I increasingly had to diagnose their hardware problems. Along the way, they got the 1st 3033 engineering system (#3 or #4) outside POK; testing took only a percent or two of the processor ... so we found a couple banks of 3330s and put up a private online service on the 3033. Early one monday morning, I get a call asking what I had done to the 3033 system over the weekend; there was enormous degradation in throughput. Took an hour or two to get them to admit that somebody had replaced the 3830 with a 3880. Testing up until then had been functional, but this was the 1st time for any sort of performance operation. They managed to tweak some stuff ... so by the time of 3090, it wasn't nearly as bad as it started out (note: one of the tweaks was caching stuff about the previous I/O; if the next I/O was the same channel, it helped, if it was a different channel, things were much worse). recent channel I/O posts
https://www.linkedin.com/pulse/mainframe-channel-io-lynn-wheeler/
https://www.linkedin.com/pulse/mainframe-channel-redrive-lynn-wheeler/

posts mentioning playing disk engineer
https://www.garlic.com/~lynn/subtopic.html#disk

some posts mentioning 3081 and TCM
https://www.garlic.com/~lynn/2022h.html#117 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022g.html#22 3081 TCMs
https://www.garlic.com/~lynn/2022g.html#7 3880 DASD Controller
https://www.garlic.com/~lynn/2022c.html#109 TCMs & IBM Mainframe
https://www.garlic.com/~lynn/2022c.html#108 TCMs & IBM Mainframe
https://www.garlic.com/~lynn/2022c.html#107 TCMs & IBM Mainframe
https://www.garlic.com/~lynn/2022c.html#106 TCMs & IBM Mainframe
https://www.garlic.com/~lynn/2022c.html#5 4361/3092
https://www.garlic.com/~lynn/2022b.html#77 Channel I/O
https://www.garlic.com/~lynn/2022.html#98 Virtual Machine SIE instruction
https://www.garlic.com/~lynn/2022.html#20 Service Processor
https://www.garlic.com/~lynn/2021k.html#105 IBM Future System
https://www.garlic.com/~lynn/2021i.html#66 Virtual Machine Debugging
https://www.garlic.com/~lynn/2021f.html#23 IBM Zcloud - is it just outsourcing ?
https://www.garlic.com/~lynn/2021c.html#66 ACP/TPF 3083
https://www.garlic.com/~lynn/2021c.html#58 MAINFRAME (4341) History
https://www.garlic.com/~lynn/2021.html#52 Amdahl Computers

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM AIX

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM AIX
Date: 18 Jan, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#40 IBM AIX
https://www.garlic.com/~lynn/2023.html#39 IBM AIX
https://www.garlic.com/~lynn/2021j.html#29 IBM AIX
https://www.garlic.com/~lynn/2021d.html#83 IBM AIX

sometime after leaving IBM, I got brought in as a consultant to a small client/server startup. Two of the former Oracle people (that we had worked with on 128-way commercial cluster scale-up) were there, responsible for something called "commerce server", and they wanted to do payment transactions on the server. The startup had also invented this technology called "SSL" they wanted to use; the result is now frequently called "electronic commerce". I had absolute authority over everything between webservers and the payment network ... and had to do a lot of documentation and software to make it operational. I did a talk about "Why Internet Isn't Business Critical Dataprocessing" based on the work I had to do for "electronic commerce". I was doing some stuff with Postel (internet standards editor):
https://en.wikipedia.org/wiki/Jon_Postel
and he sponsored my talk at ISI and USC

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
electronic commerce payment gateway posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

During this period, webserver loads were increasing and many started having >90% processor overhead running the FINWAIT list; closing sessions were kept on a list that was sequentially scanned for every incoming packet, on the assumption the list would only ever hold a few items. HTTP/HTTPS generated an enormous number of (short "transaction") sessions in a short period of time. The startup installed a Sequent Dynix system ... which had fixed the FINWAIT problem some time before.
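The scan overhead can be put in a toy model (numbers hypothetical): because the FINWAIT list was scanned sequentially for every incoming packet, total work grows as list length times packet count, which is why thousands of short HTTP sessions pushed overhead past 90%.

```python
# Toy model of the FINWAIT-list overhead (numbers hypothetical): the
# TCP stack scans the list of closing sessions once per incoming packet,
# so total work is list length x packet count -- harmless for a handful
# of long-lived sessions, crushing for thousands of short HTTP sessions.

def scan_cost(finwait_entries, packets):
    """Total list comparisons: one full sequential scan per packet."""
    return finwait_entries * packets

few = scan_cost(finwait_entries=5, packets=1_000)        # design assumption
many = scan_cost(finwait_entries=10_000, packets=1_000)  # HTTP-era reality
print(few, many)   # 5000 vs 10000000 comparisons for the same traffic
```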

sequent trivia: before leaving IBM, I was asked to participate in SCI standards. After leaving IBM I did some consulting with Convex on Exemplar SCI and later consulted for Steve Chen ... at the time CTO of Sequent (before it was bought by IBM and shut down) ... related to the SCI-based NUMA-Q. At that time, Sequent was also running Windows NT on their other multiprocessors and claimed they had done nearly all the work for NT multiprocessor scale-up.

SMP, tightly-coupled, multiprocessor, and/or compare-and-swap posts
https://www.garlic.com/~lynn/subtopic.html#smp

some finwait posts
https://www.garlic.com/~lynn/2022f.html#27 IBM "nine-net"
https://www.garlic.com/~lynn/2022e.html#28 IBM "nine-net"
https://www.garlic.com/~lynn/2021k.html#80 OSI Model
https://www.garlic.com/~lynn/2021h.html#86 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2021f.html#29 Quic gives the internet's data transmission foundation a needed speedup
https://www.garlic.com/~lynn/2019.html#74 21 random but totally appropriate ways to celebrate the World Wide Web's 30th birthday
https://www.garlic.com/~lynn/2018f.html#102 Netscape: The Fire That Filled Silicon Valley's First Bubble
https://www.garlic.com/~lynn/2018d.html#63 tablets and desktops was Has Microsoft
https://www.garlic.com/~lynn/2017i.html#45 learning Unix, was progress in e-mail, such as AOL
https://www.garlic.com/~lynn/2017c.html#54 The ICL 2900
https://www.garlic.com/~lynn/2017c.html#52 The ICL 2900
https://www.garlic.com/~lynn/2016e.html#127 Early Networking
https://www.garlic.com/~lynn/2016e.html#43 How the internet was invented
https://www.garlic.com/~lynn/2015h.html#113 Is there a source for detailed, instruction-level performance info?
https://www.garlic.com/~lynn/2015g.html#96 TCP joke
https://www.garlic.com/~lynn/2015f.html#71 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2015e.html#25 The real story of how the Internet became so vulnerable
https://www.garlic.com/~lynn/2015d.html#50 Western Union envisioned internet functionality
https://www.garlic.com/~lynn/2015d.html#2 Knowledge Center Outage May 3rd

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM changes between 1968 and 1989

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM changes between 1968 and 1989
Date: 19 Jan, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#36 IBM changes between 1968 and 1989

trivia: in aug1976, online commercial TYMSHARE
https://en.wikipedia.org/wiki/Tymshare
started offering their CMS-based online computer conferencing system (precursor to modern social media) "free" to the IBM mainframe user group "SHARE"
https://en.wikipedia.org/wiki/SHARE_(computing)
archives here, can search for user comments about IBM FUD marketing:
http://vm.marist.edu/~vmshare

coworker at science center responsible for the internal network
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/
technology also used for corporate sponsored univ. bitnet
https://en.wikipedia.org/wiki/BITNET
he tried to get IBM to support internet & failed, SJMN article (behind paywall but mostly free at wayback)
https://web.archive.org/web/20000124004147/http://www1.sjmercury.com/svtech/columns/gillmor/docs/dg092499.htm
additional correspondence with IBM executives
https://web.archive.org/web/20000115185349/http://www.edh.net/bungle.htm

Starting in early 80s, had HSDT project, T1 (1.5mbits/sec) and faster computer links; and was supposed to get $20M from the NSF director to interconnect the NSF supercomputer sites. Then congress cuts the budget, some other things happen and finally an RFP is released (based in part on what we already had running). Preliminary Announcement (28Mar1986)
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

IBM internal politics did not allow us to bid (being blamed for online computer conferencing, precursor to social media, inside IBM likely contributed; folklore is that 5of6 members of the corporate executive committee wanted to fire me). The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid, RFP awarded 24Nov87). As regional networks connected in, it became the NSFNET backbone, precursor to the modern internet
https://www.technologyreview.com/s/401444/grid-computing/

Lots of battles with the communication group, since their IBM mainframe controllers only supported links up to 56kbit/sec; they were constantly spreading misinformation and FUD trying to preserve their position and dumb terminal paradigm. In the mid-80s, they even prepared a report for the corporate executive committee claiming IBM customers wouldn't be wanting T1 until sometime well into the 90s. The 37x5 boxes supported "fat pipes", multiple parallel 56kbit links treated as a single link. They surveyed customers and found nobody with more than six 56kbit links in a "fat pipe". What they didn't know (or avoided explaining) was that the typical telco tariff for T1 was about the same as for five or six 56kbit links. In a trivial survey, we found 200 IBM mainframe customers that had simply moved to full T1, supported by non-IBM controllers.
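A quick check of the tariff arithmetic (T1 payload rate per the text; the price equivalence is the approximate tariff relationship described, not an exact figure):

```python
# Bandwidth arithmetic behind the "fat pipe" observation: at roughly the
# same tariff as five or six 56kbit links, a full T1 delivered several
# times the bandwidth of the biggest fat pipe anyone was actually running.
T1_KBPS = 1544                    # T1, 1.5mbit/sec
LINK_KBPS = 56
fat_pipe_kbps = 6 * LINK_KBPS     # largest observed fat pipe: six links
print(fat_pipe_kbps)              # 336
print(round(T1_KBPS / fat_pipe_kbps, 1))   # ~4.6x the bandwidth
```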

a little more on "inventing" the internet
https://www.linkedin.com/pulse/inventing-internet-lynn-wheeler/

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
online commercial (virtual machine based) service bureaus posts
https://www.garlic.com/~lynn/submain.html#timeshare
IBM demise of wild ducks, downturn, downfall, breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

posts referencing IBM fud(/fear, uncertainty, doubt) marketing
https://www.garlic.com/~lynn/2023.html#37 Adventure Game
https://www.garlic.com/~lynn/2023.html#36 IBM changes between 1968 and 1989
https://www.garlic.com/~lynn/2022h.html#114 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2021g.html#42 IBM Token-Ring
https://www.garlic.com/~lynn/2021d.html#41 IBM Powerpoint sales presentations
https://www.garlic.com/~lynn/2021d.html#14 The Rise of the Internet
https://www.garlic.com/~lynn/2018e.html#91 The (broken) economics of OSS
https://www.garlic.com/~lynn/2018e.html#27 Wearing a tie cuts circulation to your brain
https://www.garlic.com/~lynn/2018e.html#0 Service Bureau Corporation
https://www.garlic.com/~lynn/2018d.html#22 The Rise and Fall of IBM
https://www.garlic.com/~lynn/2018d.html#6 Workplace Advice I Wish I Had Known
https://www.garlic.com/~lynn/2017i.html#35 IBM Shareholders Need Employee Enthusiasm, Engagemant And Passions
https://www.garlic.com/~lynn/2017c.html#28 The ICL 2900
https://www.garlic.com/~lynn/2016g.html#53 IBM Sales & Marketing
https://www.garlic.com/~lynn/2015.html#54 How do we take political considerations into account in the OODA-Loop?
https://www.garlic.com/~lynn/2013m.html#7 Voyager 1 just left the solar system using less computing powerthan your iP
https://www.garlic.com/~lynn/2013j.html#23 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2011j.html#44 S&P Downgrades USA; Time to Downgrade S&P?
https://www.garlic.com/~lynn/2010f.html#81 The 2010 Census
https://www.garlic.com/~lynn/2009d.html#77 Who first mentioned Credit Crunch?
https://www.garlic.com/~lynn/2009c.html#14 Assembler Question
https://www.garlic.com/~lynn/2009b.html#51 Will the Draft Bill floated in Congress yesterday to restrict trading of naked Credit Default Swaps help or aggravate?

--
virtualization experience starting Jan1968, online at home since Mar1970

Adventure Game

From: Lynn Wheeler <lynn@garlic.com>
Subject: Adventure Game
Date: 20 Jan, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#37 Adventure Game

.... circa 1980, the author of REX(X) wrote a client/server multi-user (3270) space war game (in PLI) ..... it used the internal SPM (internal superset of combined VMCF, IUCV, and SMSG, originally developed by the IBM Pisa science center for CP67) to communicate with the server(/controller) ... RSCS/VNET had SPM support so the game could be played over the network. Very early, robot players appeared that were beating all the human players (in large part because of faster response). The server/controller was then upgraded with a non-linear increase in energy use as the interval between commands dropped below normal human threshold (somewhat leveling the playing field).
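The leveling rule might have looked something like the following sketch (the threshold value and the quadratic penalty curve are my assumptions; the post only says energy use rose non-linearly below the human threshold):

```python
# Hypothetical sketch of the anti-robot leveling rule: energy cost per
# command is flat at human-paced intervals, but rises non-linearly
# (quadratically here, my choice) once the interval between commands
# drops below an assumed human reaction threshold.

HUMAN_THRESHOLD = 0.25   # seconds -- assumed human reaction floor
BASE_COST = 1.0          # energy per command at human pace

def energy_cost(interval_sec):
    if interval_sec >= HUMAN_THRESHOLD:
        return BASE_COST
    return BASE_COST * (HUMAN_THRESHOLD / interval_sec) ** 2

print(energy_cost(1.0))      # human-paced: 1.0
print(energy_cost(0.03125))  # robot at 8x human speed pays 64x the energy
```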

internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

some past (3270) space war game posts
https://www.garlic.com/~lynn/2021c.html#11 Air Force thinking of a new F-16ish fighter
https://www.garlic.com/~lynn/2021b.html#62 Early Computer Use
https://www.garlic.com/~lynn/2018e.html#104 The (broken) economics of OSS
https://www.garlic.com/~lynn/2015g.html#99 PROFS & GML
https://www.garlic.com/~lynn/2015d.html#9 PROFS
https://www.garlic.com/~lynn/2014.html#1 Application development paradigms [was: RE: Learning Rexx]
https://www.garlic.com/~lynn/2013j.html#38 1969 networked word processor "Astrotype"
https://www.garlic.com/~lynn/2011i.html#66 Wasn't instant messaging on IBM's VM/CMS in the early 1980s
https://www.garlic.com/~lynn/2011g.html#56 VAXen on the Internet
https://www.garlic.com/~lynn/2010k.html#33 Was VM ever used as an exokernel?
https://www.garlic.com/~lynn/2010h.html#0 What is the protocal for GMT offset in SMTP (e-mail) header time-stamp?
https://www.garlic.com/~lynn/2010d.html#74 Adventure - Or Colossal Cave Adventure
https://www.garlic.com/~lynn/2009j.html#79 Timeline: The evolution of online communities
https://www.garlic.com/~lynn/2005u.html#4 Fast action games on System/360+?
https://www.garlic.com/~lynn/2004m.html#20 Whatever happened to IBM's VM PC software?

some posts mentioning SPM:
https://www.garlic.com/~lynn/2022f.html#94 Foreign Language
https://www.garlic.com/~lynn/2022e.html#96 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022e.html#1 IBM Games
https://www.garlic.com/~lynn/2022c.html#81 Peer-Coupled Shared Data
https://www.garlic.com/~lynn/2022c.html#33 CMSBACK & VMFPLC
https://www.garlic.com/~lynn/2022.html#29 IBM HONE
https://www.garlic.com/~lynn/2021h.html#78 IBM Internal network
https://www.garlic.com/~lynn/2020.html#46 Watch AI-controlled virtual fighters take on an Air Force pilot on August 18th
https://www.garlic.com/~lynn/2018e.html#104 The (broken) economics of OSS
https://www.garlic.com/~lynn/2018c.html#41 S/360 announce 4/7/1964, 54yrs
https://www.garlic.com/~lynn/2017k.html#38 CMS style XMITMSG for Unix and other platforms
https://www.garlic.com/~lynn/2016c.html#1 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2014e.html#48 Before the Internet: The golden age of online service
https://www.garlic.com/~lynn/2014.html#1 Application development paradigms [was: RE: Learning Rexx]
https://www.garlic.com/~lynn/2013j.html#42 1969 networked word processor "Astrotype"
https://www.garlic.com/~lynn/2012j.html#7 Operating System, what is it?
https://www.garlic.com/~lynn/2012g.html#18 VM Workshop 2012
https://www.garlic.com/~lynn/2012e.html#64 Typeface (font) and city identity
https://www.garlic.com/~lynn/2012d.html#38 Invention of Email
https://www.garlic.com/~lynn/2012d.html#24 Inventor of e-mail honored by Smithsonian
https://www.garlic.com/~lynn/2011f.html#48 A brief history of CMS/XA, part 1
https://www.garlic.com/~lynn/2011b.html#61 VM13025 ... zombie/hung users
https://www.garlic.com/~lynn/2008n.html#51 Baudot code direct to computers?
https://www.garlic.com/~lynn/2008g.html#41 Was CMS multi-tasking?
https://www.garlic.com/~lynn/2008g.html#22 Was CMS multi-tasking?
https://www.garlic.com/~lynn/2007o.html#68 CA to IBM TCP Conversion
https://www.garlic.com/~lynn/2006t.html#47 To RISC or not to RISC
https://www.garlic.com/~lynn/2006t.html#46 To RISC or not to RISC
https://www.garlic.com/~lynn/2006m.html#54 DCSS
https://www.garlic.com/~lynn/2006k.html#51 other cp/cms history
https://www.garlic.com/~lynn/2005n.html#45 Anyone know whether VM/370 EDGAR is still available anywhere?
https://www.garlic.com/~lynn/2005k.html#59 Book on computer architecture for beginners
https://www.garlic.com/~lynn/2004q.html#72 IUCV in VM/CMS

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 3081 TCM

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 3081 TCM
Date: 20 Jan, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#41 IBM 3081 TCM

After joining IBM one of my hobbies was enhanced production operating systems for internal datacenters. I also got to spend time at user group meetings and stop by customers. The director of one of the largest financial industry datacenters liked me to stop by and talk technology. At some point the IBM branch manager horribly offended the customer and in retaliation, they ordered an Amdahl system (up until then Amdahl had been selling into technical/univ accounts, but this would be the 1st true blue customer account, a lonely Amdahl system in a vast sea of blue). I was then asked to go spend 6-12 months on site at the customer (to help obfuscate why the Amdahl system was being ordered). I talked it over with the customer and then declined. I was then told that the branch manager was a good sailing buddy of the IBM CEO and the 1st commercial Amdahl order would adversely affect his career; if I didn't do this, I could forget having a career, promotions, and/or raises. I still didn't do it. some more here
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

(FS) storage had tag descriptors, a little like object-oriented systems ... but the hardware might require 4-5 storage references to retrieve the actual data. One of the final nails in the FS coffin was analysis by the IBM Houston Science Center showing that 370/195 applications run on an FS machine made out of the fastest available technology would have the throughput of a 370/145 (about a factor of 30 times slowdown).
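The indirection cost can be illustrated with a toy model (structure mine, not the actual FS hardware design): each operand fetch chases a descriptor chain before reaching the data, multiplying storage references per access.

```python
# Toy model (structure mine, not the real FS hardware) of why tag
# descriptors hurt: each operand access chases a chain of descriptors,
# so one logical fetch costs several storage references instead of one.

def fetch(mem, addr, levels):
    """Follow a descriptor chain 'levels' deep, then fetch the data."""
    refs = 0
    for _ in range(levels):
        addr = mem[addr]        # each hop is one storage reference
        refs += 1
    return mem[addr], refs + 1  # final reference retrieves the data itself

# A 4-level descriptor chain ending at the data value 42:
mem = {0: 1, 1: 2, 2: 3, 3: 4, 4: 42}
value, refs = fetch(mem, 0, 4)
print(value, refs)   # 42 5 -- five storage references for one operand
```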

It also had one-level-store ... sort of like TSS/360 ... I would claim that I found out what not to do by observing TSS/360 when I did a CMS page-mapped filesystem ... it never shipped, in part because the FS experience had given any page-related filesystem such a bad reputation (even tho I could easily show my CMS page-mapped filesystem had at least three times the throughput of the standard filesystem).

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
paged-map (cms) filesystem posts
https://www.garlic.com/~lynn/submain.html#mmap

there is folklore that IBM had a flow sensor on the inbound side of the heat exchanger but no flow sensor on the outbound side. One customer lost flow on the outbound side ... and by the time the thermal sensor tripped it was too late and all the TCMs fried. After that ... flow sensors were installed on the outbound side at all customers.

some past posts mentioning the lacking flow sensor on the outbound side
https://www.garlic.com/~lynn/2022h.html#117 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2017j.html#67 US to regain supercomputing supremacy with Summit
https://www.garlic.com/~lynn/2017g.html#56 What is the most epic computer glitch you have ever seen?
https://www.garlic.com/~lynn/2010p.html#26 EPO's (Emergency Power Off)
https://www.garlic.com/~lynn/2010j.html#71 taking down the machine - z9 series
https://www.garlic.com/~lynn/2010d.html#43 What was old is new again (water chilled)
https://www.garlic.com/~lynn/2009b.html#77 Z11 - Water cooling?
https://www.garlic.com/~lynn/2004p.html#41 IBM 3614 and 3624 ATM's
https://www.garlic.com/~lynn/2004p.html#36 IBM 3614 and 3624 ATM's
https://www.garlic.com/~lynn/2004p.html#35 IBM 3614 and 3624 ATM's
https://www.garlic.com/~lynn/2002d.html#13 IBM Mainframe at home
https://www.garlic.com/~lynn/2000b.html#38 How to learn assembler language for OS/390 ?
https://www.garlic.com/~lynn/2000b.html#36 How to learn assembler language for OS/390 ?

--
virtualization experience starting Jan1968, online at home since Mar1970

MTS & IBM 360/67

From: Lynn Wheeler <lynn@garlic.com>
Subject: MTS & IBM 360/67
Date: 21 Jan, 2023
Blog: Facebook
Numerous places were sold the 360/67 for TSS/360 ... but TSS/360 never came to real production fruition. Many places dropped back to using it as a 360/65 for OS/360. A couple places wrote their own virtual memory operating systems. Some used it for IBM CP67 (precursor to VM370). There were references at one point that TSS/360 had 1200 people (Fred Brooks' Mythical Man Month and typical IBM Hudson Valley extreme bloat) at a time when the Science Center had 12 people on CP67/CMS.
https://en.wikipedia.org/wiki/CP/CMS
The IBM Cambridge Science Center did CP40/CMS on a 360/40 with virtual memory hardware mods.
https://www.garlic.com/~lynn/cp40seas1982.txt
lots more 360/67, TSS/360, CP/40, CP/67, & VM/370 lore from Melinda's history
https://www.leeandmelindavarian.com/Melinda#VMHist

Univ. of Michigan did the virtual memory "MTS" (Michigan Terminal System) for their 360/67. Later ported to 370, MTS/370 saw some life at a number of locations (frequently on Amdahl machines). Very early in 360 days, Lincoln Labs had written LLMPS (Lincoln Labs multiprogramming system) ... sort of a multitasking, super DEBE ... and contributed it to the SHARE software library. Folklore is that Univ. of Mich. initially scaffolded MTS using LLMPS as a base.
https://en.wikipedia.org/wiki/Michigan_Terminal_System
https://web.archive.org/web/20221216212415/http://archive.michigan-terminal-system.org/
from somebody that worked on MTS
http://www.eecis.udel.edu/~mills/gallery/gallery7.html
http://www.eecis.udel.edu/~mills/gallery/gallery8.html
other MTS pages have gone 404 ... but still are at wayback machine:
https://web.archive.org/web/20050212073808/www.itd.umich.edu/~doc/Digest/0596/feat01.html
https://web.archive.org/web/20050212073808/www.itd.umich.edu/~doc/Digest/0596/feat02.html
https://web.archive.org/web/20050212183905/www.itd.umich.edu/~doc/Digest/0596/feat03.html
a little MTS lore:
https://web.archive.org/web/20221216212415/http://archive.michigan-terminal-system.org/myths
Germ of Truth. Early versions of what became UMMPS were based in part on LLMPS from MIT's Lincoln Laboratories. Early versions of what would become MTS were known as LTS. The initial "L" in LTS and in LLMPS stood for "Lincoln".

... snip ...

Stanford did Orvyl/Wylbur for their 360/67 ... Orvyl was the virtual memory operating system; the Wylbur editor was later ported to MVS and still survives in some places
https://en.wikipedia.org/wiki/ORVYL_and_WYLBUR
https://www.slac.stanford.edu/spires/explain/manuals/ORVMAN.HTML
http://www.stanford.edu/dept/its/support/wylorv/

other trivia: Lincoln was 1st location that Cambridge installed CP67. The univ. I was at was their 2nd location (3rd location after Cambridge itself). some more history
https://www.linkedin.com/pulse/zvm-50th-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-2-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-4-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-5-lynn-wheeler/
along with
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

misc. past archived posts mentioning Univ. of Michigan, MTS, 360/67, CP67/CMS
https://www.garlic.com/~lynn/2021j.html#68 MTS, 360/67, FS, Internet, SNA
https://www.garlic.com/~lynn/2021e.html#43 Blank 80-column punch cards up for grabs
https://www.garlic.com/~lynn/2021b.html#27 DEBE?
https://www.garlic.com/~lynn/2016c.html#6 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2015d.html#35 Remember 3277?
https://www.garlic.com/~lynn/2012g.html#25 VM370 40yr anniv, CP67 44yr anniv

--
virtualization experience starting Jan1968, online at home since Mar1970

370/125 and MVCL instruction

From: Lynn Wheeler <lynn@garlic.com>
Subject: 370/125 and MVCL instruction
Date: 21 Jan, 2023
Blog: Facebook
MVCL(/CLCL) trivia: all 360 instructions checked both the starting and ending addresses of storage operands for access before starting execution. The 370 MVCL/CLCL were incremental ... access was tested as the instruction executed. However, the 115/125 initially implemented MVCL/CLCL as per 360 (taking a program interrupt before starting execution if it couldn't access the ending address). VM/370 was announced for the 135 and up ... but not for the 125.

I was asked to get VM370 running on a 256kbyte 370/125 (NYC office of a Norwegian shipping company). The VM370 boot had an MVCL instruction clearing storage and testing for the end of (real) memory (on the addressing program interrupt, the registers would hold the end-of-storage address). On the 125, the instruction wouldn't even start because the ending address was 16mbytes ... and so the machine appeared to not have sufficient memory. I zapped the VM370 IPL code to get around the MVCL.
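The difference between the two access-check styles, and why the IPL clear-storage trick failed outright on the 125, can be modeled roughly like this (the model is mine; the real behavior was in microcode):

```python
# Toy contrast of the two MVCL access-check styles (model mine, not the
# real microcode). The IPL code cleared storage with a 16MB-length MVCL,
# relying on the addressing interrupt to report where real memory ends.

REAL_MEMORY = 256 * 1024   # the 370/125's 256KB

def mvcl_incremental(start, length):
    """370 semantics: access checked as execution proceeds; on an
    addressing exception the registers show how far the clear got."""
    end = min(start + length, REAL_MEMORY)
    interrupted = (start + length) > REAL_MEMORY
    return end - start, interrupted

def mvcl_precheck(start, length):
    """360-style (early 115/125) semantics: ending address checked up
    front, so the instruction never starts -- nothing is cleared."""
    if start + length > REAL_MEMORY:
        return 0, True          # program check before execution
    return length, False

print(mvcl_incremental(0, 16 * 1024 * 1024))  # (262144, True): found end of memory
print(mvcl_precheck(0, 16 * 1024 * 1024))     # (0, True): looks like no memory at all
```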

CP67 would run on 256kbyte machines ... and I had further cut its fixed storage by another 1/3rd to improve things on smaller memory machines. VM370 had gotten horribly bloated in fixed storage size ... and I redid for VM370 some of the things that I had done for CP67 to improve throughput.

Then the 115/125 group cons me into doing VM370 for a five processor 125. The 115/125 had a nine position memory bus for microprocessors. On the 115, all microprocessors (about 800kips) were the same, for both controllers and the 370 engine (about 80kips 370, avg 10 native instructions per 370 instruction). The 125 was the same, but the microprocessor running 370 was 1.2mips (120kips 370, again avg 10 native instructions per 370 instruction). The five processor work included dropping large amounts of VM370 logic into native microcode.

At the same time, Endicott had con'ed me into helping with ECPS ... dropping pieces of VM370 into native microcode for the 138/148. In their case, there was 6kbytes of microcode space and 370 instructions would translate approximately byte-for-byte into native code. I was to identify the highest executed 6kbytes of VM370 code for moving into microcode. Old archived post with the result of that analysis (the top 6kbytes of VM370 kernel executed instructions accounted for 79.55% of VM370 kernel time; moved to native microcode they would run ten times faster).
https://www.garlic.com/~lynn/94.html#21
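The selection itself amounts to a greedy knapsack-style pick: rank kernel routines by sampled execution time and take them in order until the 6kbyte microcode budget is spent. A sketch with made-up routine names and numbers (the real analysis is in the linked post):

```python
# Sketch of the ECPS selection (routine names, sizes, and time fractions
# are made up): rank kernel code by sampled execution time and take
# routines greedily until the 6KB microcode budget is exhausted.

BUDGET = 6 * 1024   # bytes of available microcode space

# (routine, bytes, fraction of sampled kernel time) -- hypothetical data
profile = [
    ("DISPATCH", 2048, 0.31),
    ("FREE/FRET", 1536, 0.22),
    ("UNTRANS", 1024, 0.15),
    ("PAGING", 1536, 0.12),
    ("VIRTUAL-INT", 2048, 0.08),
]

def pick_hottest(profile, budget):
    chosen, used, covered = [], 0, 0.0
    for name, size, frac in sorted(profile, key=lambda r: -r[2]):
        if used + size <= budget:   # skip routines that no longer fit
            chosen.append(name)
            used += size
            covered += frac
    return chosen, used, covered

names, used, covered = pick_hottest(profile, BUDGET)
print(names, used, round(covered, 2))   # budget filled, ~80% of kernel time
```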

Endicott then objected to the Boeblingen five processor 125 project and escalated to a meeting in DPD hdqtrs. I had to sit on both sides of the table arguing for both sides. Endicott did get the Boeblingen effort suppressed.

Endicott then tried to get corporate to allow them to pre-install VM370 on every 138/148 shipped ... but POK was vigorously working to get VM370 product killed ... so that didn't happen.

SMP, multiprocessor, tightly-coupled, and/or compare&swap instruction
https://www.garlic.com/~lynn/subtopic.html#smp
five processor 125 posts
https://www.garlic.com/~lynn/submain.html#bounce

past posts mentioning both five processor 370/125 and 138/148 ECPS
https://www.garlic.com/~lynn/2022f.html#44 z/VM 50th
https://www.garlic.com/~lynn/2022e.html#9 VM/370 Going Away
https://www.garlic.com/~lynn/2021k.html#38 IBM Boeblingen
https://www.garlic.com/~lynn/2021h.html#91 IBM XT/370
https://www.garlic.com/~lynn/2020.html#39 If Memory Had Been Cheaper
https://www.garlic.com/~lynn/2019c.html#33 IBM Future System
https://www.garlic.com/~lynn/2019.html#84 IBM 5100
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2018f.html#52 All programmers that developed in machine code and Assembly in the 1940s, 1950s and 1960s died?
https://www.garlic.com/~lynn/2018e.html#95 The (broken) economics of OSS
https://www.garlic.com/~lynn/2018e.html#30 These Are the Best Companies to Work For in the U.S
https://www.garlic.com/~lynn/2017g.html#28 Eliminating the systems programmer was Re: IBM cuts contractor bil ling by 15 percent (our else)
https://www.garlic.com/~lynn/2017.html#75 The ICL 2900
https://www.garlic.com/~lynn/2017.html#74 The ICL 2900
https://www.garlic.com/~lynn/2016d.html#65 PL/I advertising
https://www.garlic.com/~lynn/2016d.html#64 PL/I advertising
https://www.garlic.com/~lynn/2016b.html#78 Microcode
https://www.garlic.com/~lynn/2015h.html#105 DOS descendant still lives was Re: slight reprieve on the z
https://www.garlic.com/~lynn/2015g.html#91 IBM 4341, introduced in 1979, was 26 times faster than the 360/30
https://www.garlic.com/~lynn/2015d.html#14 3033 & 3081 question
https://www.garlic.com/~lynn/2015b.html#39 Connecting memory to 370/145 with only 36 bits
https://www.garlic.com/~lynn/2014m.html#107 IBM 360/85 vs. 370/165
https://www.garlic.com/~lynn/2014k.html#23 1950: Northrop's Digital Differential Analyzer
https://www.garlic.com/~lynn/2011p.html#82 Migration off mainframe
https://www.garlic.com/~lynn/2008l.html#85 old 370 info
https://www.garlic.com/~lynn/2003.html#5 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2002o.html#16 Home mainframes
https://www.garlic.com/~lynn/2001i.html#2 Most complex instructions (was Re: IBM 9020 FAA/ATC Systems from 1960's)
https://www.garlic.com/~lynn/2000e.html#6 Ridiculous

--
virtualization experience starting Jan1968, online at home since Mar1970

MTS & IBM 360/67

From: Lynn Wheeler <lynn@garlic.com>
Subject: MTS & IBM 360/67
Date: 21 Jan, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#46 MTS & IBM 360/67

other trivia ... not long after we had both left IBM, I was brought in as a consultant for the backend systems for the 2000 Census and was also asked to handle audits by other agencies. After one such audit, having stood up in front of the room nearly all day answering questions ... we were sitting around a table with the primary person who had been asking all the questions, along with other people from census ... and the guy said he had got a graduate degree in computer engineering from UofMich. My wife says so did she and asked him what year ... he replied and my wife said so did she and she was the only female. He said no you weren't and named somebody else ... she said that was her. He replied you sure look older.

a few past posts mentioning 2000 census and my wife at UofMich
https://www.garlic.com/~lynn/2021d.html#90 Bizarre Career Events
https://www.garlic.com/~lynn/2015.html#72 George W. Bush: Still the worst; A new study ranks Bush near the very bottom in history
https://www.garlic.com/~lynn/2012k.html#87 Cultural attitudes towards failure
https://www.garlic.com/~lynn/2010f.html#54 The 2010 Census
https://www.garlic.com/~lynn/2008d.html#63 was: 1975 movie "Three Days of the Condor" tech stuff
https://www.garlic.com/~lynn/2003p.html#12 Danger: Derrida at work

--
virtualization experience starting Jan1968, online at home since Mar1970

23Jun1969 Unbundling and Online IBM Branch Offices

From: Lynn Wheeler <lynn@garlic.com>
Subject: 23Jun1969 Unbundling and Online IBM Branch Offices
Date: 21 Jan, 2023
Blog: Facebook
Note: branch office 2741 terminals and the 23Jun1969 unbundling announcement; IBM started to charge for software, maint., SE services, etc. Previously part of SE training was sort of an apprentice program as part of a large group onsite at the customer. After the unbundling announce, IBM couldn't figure out how not to charge for trainee SEs onsite at the customer (the large SE group constantly onsite, disrupting IBM's traditional account control, had also been a critical part of training new generations of SEs). To address part of the SE training, HONE (hands-on network experience) was created with branch office online access to CP67 (360/67 precursor to VM/370) datacenters for practice running guest operating systems. When 370 was initially announced, the HONE CP67 systems were enhanced with simulation for the new 370 instructions ... allowing branch office SEs to run 370 guest operating systems (in virtual machines).

One of my hobbies after joining IBM was enhanced production systems for internal datacenters and HONE was a long time customer. The Cambridge science center had also ported APL\360 to CMS for CMS\APL ... fixing its memory management for large virtual memory workspaces (APL\360 traditionally had 16kbyte workspaces) in a demand page environment and adding APIs for system services (like file i/o, enabling lots of real world applications). HONE then started offering online APL-based sales&marketing tools ... which came to dominate all HONE activity ... and SE training with guest operating systems just dwindled away (with SE skill level dropping ... SEs increasingly became a phone directory for internal IBM technical contacts).

Some of my earliest IBM trips overseas were HONE asking me to go along for the early non-US HONE installs, first was for EMEA in brand new bldg in Paris, La Defense (hadn't been finished, including landscaping was still brown dirt) then Tokyo (AFE).

Then mid-70s, the US HONE datacenters were consolidated in Palo Alto, also with loosely-coupled, shared DASD, single-system-image; load-balancing and fall-over across complex. Note, in the CP67->VM370, a lot of feature/function was dropped and/or simplified (including multiprocessor support). I spent a lot of 1974, adding it back into VM370 ... some old archived email
https://www.garlic.com/~lynn/2006v.html#email731212
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

early 1975, I was also asked to do a VM370 for the 370/125 group that would support up to five processors in a tightly-coupled multiprocessor system, and an ECPS microcode assist for the Endicott 138/148 group (Endicott got the 125 multiprocessor work canceled because they felt that a 5-processor 125 would overlap 138/148 performance). Endicott wanted to pre-install VM/370 on every 138/148 shipped, but POK was in the process of convincing corp. to kill the VM370 product, and corp veto'ed it. I then retrofitted the multiprocessor support to VM370 for the 370/168 systems at US HONE in Palo Alto (allowing them to double the number of processors in their single-system image complex). Note extensive branch office HONE keyboard activity (first 2741, then 3270) well before PROFS.

learson trying to block the bureaucrats, careerists (and MBAs) destroying Watson legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
misc other
https://www.linkedin.com/pulse/zvm-50th-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-2-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-4-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-5-lynn-wheeler/
https://www.linkedin.com/pulse/mainframe-channel-io-lynn-wheeler/
https://www.linkedin.com/pulse/mainframe-channel-redrive-lynn-wheeler/
other refs about long term effects destroying watson legacy
https://www.linkedin.com/pulse/ibm-downfall-lynn-wheeler/
https://www.linkedin.com/pulse/ibm-breakup-lynn-wheeler/
https://www.linkedin.com/pulse/ibm-controlling-market-lynn-wheeler/

posts mentioning 23jun1969 unbundling
https://www.garlic.com/~lynn/submain.html#unbundle
posts mentioning HONE
https://www.garlic.com/~lynn/subtopic.html#hone
SMP, multiprocessor, tightly-coupled, and/or compare&swap instruction
https://www.garlic.com/~lynn/subtopic.html#smp
five processor 125 posts
https://www.garlic.com/~lynn/submain.html#bounce
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

370 Virtual Memory Decision

From: Lynn Wheeler <lynn@garlic.com>
Subject: 370 Virtual Memory Decision
Date: 23 Jan, 2023
Blog: Facebook
MVT storage management gave us virtual memory for all 370s. A decade+ ago I was asked if I could track down the decision. I eventually found somebody who had been on the staff of the executive making the decision. Basically MVT storage management was so bad that regions had to be specified four times larger than actually used; as a result, a typical 1mbyte 370/165 only had enough storage for running four regions concurrently ... insufficient to keep the system busy and justify its cost. Going to 16mbyte virtual memory would allow increasing the number of regions by a factor of four with little or no paging. Initially there was VS2/SVS, little different than running MVT in a 16mbyte CP67 virtual machine. In fact, the biggest piece of code was SVC0/EXCP, which had the same problem as CP67 ... the passed channel programs would have virtual addresses and a copy of the channel program had to be made, substituting real addresses for the virtual addresses (the initial implementation in fact borrowed CCWTRANS from CP67 to craft into SVC0/EXCP). Old post with pieces of the email exchange:
https://www.garlic.com/~lynn/2011d.html#73
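The channel program copying described above can be sketched in a few lines. This is a hypothetical, much-simplified illustration (my own data structures, not the actual CP67 CCWTRANS or SVC0/EXCP code): it builds a shadow copy of a channel program, substituting real addresses for virtual ones, assuming each data area fits within one 4k page and ignoring command/data chaining.

```python
PAGE = 4096

def translate_ccws(ccws, page_table):
    """Build a shadow copy of a channel program with real addresses.

    ccws: list of (opcode, virtual_address, byte_count) tuples.
    page_table: maps virtual page number -> real page number; the
    referenced pages are assumed pinned in real storage, since the
    channel runs asynchronously using real addresses.
    Simplified sketch: no command/data chaining, and no data area
    crosses a 4k page boundary.
    """
    shadow = []
    for op, vaddr, count in ccws:
        vpage, offset = divmod(vaddr, PAGE)
        raddr = page_table[vpage] * PAGE + offset  # substitute real address
        shadow.append((op, raddr, count))
    return shadow

# read 80 bytes at virtual page 5 offset 0x10, page 5 mapped to real page 9
print(translate_ccws([(0x02, 5 * PAGE + 0x10, 80)], {5: 9}))
# → [(2, 36880, 80)]
```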

... above also references that the plan was VS2R1 (SVS) to VS2R2 (MVS) and then FS for VS2R3 (aka "Future System"). FS was completely different from 370 and was going to completely replace it. During FS, internal politics was starting to shutdown 370 efforts, and the lack of new 370 products is credited with giving the clone 370 makers their market foothold. When FS implodes, there is a mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel. Lots more FS details
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

One of the final nails in the FS coffin was analysis by the IBM Houston Science Center ... that if 370/195 apps were converted to an FS machine made out of the fastest available technology, they would have the throughput of a 370/145 (about a factor of 30 times slowdown).

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Bureaucrats, Careerists, MBAs (and Empty Suits)

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Bureaucrats, Careerists, MBAs (and Empty Suits)
Date: 23 Jan, 2023
Blog: Facebook
My long-winded tome that Learson failed trying to save Watson legacy from the bureaucrats, careerists (and MBAs)
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

After joining IBM, I drank the koolaid and wore 3piece suits for customers ... however, after being told I had no career (because I wouldn't take part in a coverup for the CEO's good sailing buddy), I never bothered again. Besides the hobby of doing enhanced production operating systems for internal datacenters ... and wandering around internal datacenters ... I spent some amount of time at user group meetings (like SHARE) and wandering around customers.

The director of one of the largest (customer) financial datacenters liked me to drop in and talk technology. At one point, the branch manager horribly offended the customer and, in retaliation, they ordered an Amdahl machine (a lonely Amdahl clone 370 in a vast sea of "blue"). Up until then Amdahl had been selling into univ. & tech/scientific markets, but clone 370s had yet to break into the IBM true-blue commercial market ... and this would be the first. I got asked to go spend 6m-12m on site at the customer to obfuscate the reason for the Amdahl order. I talked it over with the customer, who said while he would like to have me there, it would have no effect on the decision, so I declined the offer. I was then told the branch manager was a sailing buddy of the IBM CEO and I could forget a career, promotions, raises (more detail in the wild ducks tome) ... after that, I gave up on suits and white shirts ... and did get feedback from customers that it was refreshing to have somebody other than the standard IBM "empty suits".

some further effects of bureaucrats, careerists (and MBAs)
https://www.linkedin.com/pulse/ibm-downfall-lynn-wheeler/
https://www.linkedin.com/pulse/ibm-breakup-lynn-wheeler/
https://www.linkedin.com/pulse/ibm-controlling-market-lynn-wheeler/
technical related
https://www.linkedin.com/pulse/zvm-50th-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-2-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-4-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-5-lynn-wheeler/
https://www.linkedin.com/pulse/mainframe-channel-io-lynn-wheeler/
https://www.linkedin.com/pulse/mainframe-channel-redrive-lynn-wheeler/
https://www.linkedin.com/pulse/inventing-internet-lynn-wheeler/

IBM downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

some past archive posts mentioning (IBM) Empty Suits
https://www.garlic.com/~lynn/2022g.html#66 IBM Dress Code
https://www.garlic.com/~lynn/2021d.html#85 Bizarre Career Events
https://www.garlic.com/~lynn/2021d.html#66 IBM CEO Story
https://www.garlic.com/~lynn/2018f.html#68 IBM Suits
https://www.garlic.com/~lynn/2018e.html#27 Wearing a tie cuts circulation to your brain

some recent archived posts mentioning z/VM 50th
https://www.garlic.com/~lynn/2023.html#49 23Jun1969 Unbundling and Online IBM Branch Offices
https://www.garlic.com/~lynn/2023.html#46 MTS & IBM 360/67
https://www.garlic.com/~lynn/2023.html#43 IBM changes between 1968 and 1989
https://www.garlic.com/~lynn/2023.html#38 Disk optimization
https://www.garlic.com/~lynn/2023.html#33 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#31 IBM Change
https://www.garlic.com/~lynn/2023.html#28 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#0 AUSMINIUM FOUND IN HEAVY RED-TAPE DISCOVERY
https://www.garlic.com/~lynn/2022h.html#124 Corporate Computer Conferencing
https://www.garlic.com/~lynn/2022h.html#120 IBM Controlling the Market
https://www.garlic.com/~lynn/2022h.html#107 IBM Downfall
https://www.garlic.com/~lynn/2022h.html#94 IBM 360
https://www.garlic.com/~lynn/2022h.html#86 Mainframe TCP/IP
https://www.garlic.com/~lynn/2022h.html#84 CDC, Cray, Supercomputers
https://www.garlic.com/~lynn/2022h.html#77 The Internet Is Having Its Midlife Crisis
https://www.garlic.com/~lynn/2022h.html#72 The CHRISTMA EXEC network worm - 35 years and counting!
https://www.garlic.com/~lynn/2022h.html#59 360/85
https://www.garlic.com/~lynn/2022h.html#58 Model Mainframe
https://www.garlic.com/~lynn/2022h.html#43 1973 ARPANET Map
https://www.garlic.com/~lynn/2022h.html#36 360/85
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2022h.html#27 370 virtual memory
https://www.garlic.com/~lynn/2022h.html#24 Inventing the Internet
https://www.garlic.com/~lynn/2022h.html#21 370 virtual memory
https://www.garlic.com/~lynn/2022h.html#18 Sun Tzu, Aristotle, and John Boyd
https://www.garlic.com/~lynn/2022h.html#16 Inventing the Internet
https://www.garlic.com/~lynn/2022h.html#12 Inventing the Internet
https://www.garlic.com/~lynn/2022h.html#8 Elizabeth Warren to Jerome Powell: Just how many jobs do you plan to kill?
https://www.garlic.com/~lynn/2022h.html#3 AL Gore Invented The Internet

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Bureaucrats, Careerists, MBAs (and Empty Suits)

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Bureaucrats, Careerists, MBAs (and Empty Suits)
Date: 25 Jan, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#51 IBM Bureaucrats, Careerists, MBAs (and Empty Suits)

... and hand drawn charts
https://www.garlic.com/~lynn/2022f.html#57 The Man That Helped Change IBM
https://www.garlic.com/~lynn/2021j.html#42 IBM Business School Cases
https://www.garlic.com/~lynn/2021j.html#29 IBM AIX
https://www.garlic.com/~lynn/2021d.html#83 IBM AIX
https://www.garlic.com/~lynn/2021c.html#50 IBM CEO
https://www.garlic.com/~lynn/2021c.html#32 IBM Executives
https://www.garlic.com/~lynn/2000f.html#78 TSS ancient history, was X86 ultimate CISC? designs)

I was in an all-hands Austin meeting where it was said that Austin had told the IBM CEO that it was doing the RS/6000 project for NYTimes to move their newspaper system (ATEX) off VAXCluster ... but that there would be dire consequences for anybody letting it leak that it wasn't being done.

One day Nick stopped in Austin and all the local executives were out of town. My wife put together hand drawn charts and estimates for doing the NYTimes project for Nick ... and he approved it; possibly contributing to offending so many people in Austin that it was suggested we do the project in San Jose.

It started out as HA/6000, but I rename it HA/CMP (High Availability Cluster Multi-Processing) after starting to do technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (who had VAXCluster support in the same source base with Unix support ... providing some APIs with VAXCluster semantics made it easier to port to HA/CMP). Within a couple weeks of the Jan1992 cluster scale-up meeting with the Oracle CEO (16way mid92, 128way ye92), cluster scale-up is transferred (to be announced as an IBM supercomputer for technical/scientific *ONLY*) and we are told we can't work on anything with more than four processors. A few months later, we leave IBM.

The Man That Helped Change IBM
https://smallbiztrends.com/2022/08/the-man-that-helped-change-ibm.html
This week I celebrated my 700th episode of The Small Business Radio Show with Nicholas (Nick) Donofrio who began his career in 1964 at IBM. Ironically, I started at IBM in 1981 for the first 9 years of my career. Nick lasted a lot longer and remained there for 44 years. His leadership positions included division president for advanced workshops, general manager of the large-scale computing division, and executive vice president of innovation and technology. He has a new book about his career at IBM called "If Nothing Changes, Nothing Changes: The Nick Donofrio Story".

... snip ...

or at least try
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/
https://www.linkedin.com/pulse/inventing-internet-lynn-wheeler/
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

Adventure Game

From: Lynn Wheeler <lynn@garlic.com>
Subject: Adventure Game
Date: 25 Jan, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#37 Adventure Game
https://www.garlic.com/~lynn/2023.html#44 Adventure Game

Nearly all internal IBM logon screens had "For Business Purposes Only" ... IBM San Jose Research had "For Management Approved Uses Only". One year we got a visit from corporate security for an audit. One of the first things they wanted to do was remove all games. There were big public arguments in the auditorium ("games" had been classified as "computer demo" and "human factor research" projects). 6670s (ibm copier3 with computer interface) had been distributed throughout the bldg. The alternate paper drawer had been loaded with colored paper and was used to print the output "separator" page. Since the "separator" page was nearly all blank, we further modified the driver to print randomly selected quotations. The security audit was also doing off-shift sweeps looking for unsecured classified material ... including 6670 printed output. They found one with a separator page that had the quotation:
[Business Maxims:] Signs, real and imagined, which belong on the walls of the nation's offices:

1) Never Try to Teach a Pig to Sing; It Wastes Your Time and It Annoys the Pig.
2) Sometimes the Crowd IS Right.
3) Auditors Are the People Who Go in After the War Is Lost and Bayonet the Wounded.
4) To Err Is Human -- To Forgive Is Not Company Policy.


... snip ...
... they complained to management that we had done it on purpose to ridicule them. As an aside, one of the excuses for keeping the games was that people would otherwise just disguise private copies ... cluttering up the system worse than having single public versions.

past posts mentioning "Business Maxims"
https://www.garlic.com/~lynn/2022.html#57 Computer Security
https://www.garlic.com/~lynn/2021h.html#41 IBM Acronyms
https://www.garlic.com/~lynn/2021.html#85 IBM Auditors and Games
https://www.garlic.com/~lynn/2018.html#103 1956 -- circuit reliability book
https://www.garlic.com/~lynn/2015h.html#67 IMPI (System/38 / AS/400 historical)
https://www.garlic.com/~lynn/2012p.html#57 Displaywriter, Unix manuals added to Bitsavers
https://www.garlic.com/~lynn/2012l.html#6 Some fun with IBM acronyms and jargon (was Re: Auditors Don't Know Squat!)
https://www.garlic.com/~lynn/2012e.html#95 Burroughs B5000, B5500, B6500 videos
https://www.garlic.com/~lynn/2012b.html#26 Strategy subsumes culture
https://www.garlic.com/~lynn/2012.html#45 You may ask yourself, well, how did I get here?
https://www.garlic.com/~lynn/2011g.html#21 program coding pads
https://www.garlic.com/~lynn/2011f.html#62 Mixing Auth and Non-Auth Modules
https://www.garlic.com/~lynn/2011.html#89 Make the mainframe work environment fun and intuitive
https://www.garlic.com/~lynn/2010k.html#49 GML
https://www.garlic.com/~lynn/2009m.html#1 Does this count as 'computer' folklore?
https://www.garlic.com/~lynn/2008p.html#71 Password Rules
https://www.garlic.com/~lynn/2008o.html#69 Blinkenlights
https://www.garlic.com/~lynn/2008o.html#68 Blinkenlights
https://www.garlic.com/~lynn/2007b.html#36 Special characters in passwords was Re: RACF - Password rules
https://www.garlic.com/~lynn/2005f.html#51 1403 printers
https://www.garlic.com/~lynn/2002k.html#61 arrogance metrics (Benoits) was: general networking

--
virtualization experience starting Jan1968, online at home since Mar1970

Classified Material and Security

From: Lynn Wheeler <lynn@garlic.com>
Subject: Classified Material and Security
Date: 25 Jan, 2023
Blog: Facebook
At the end of the semester taking a two credit hr intro to fortran/computers class, I was hired to rewrite 1401 MPIO in 360/30 assembler. The univ had been sold a 360/67 (for tss/360) to replace its 709/1401. A 360/30 was brought in temporarily (it had 1401 emulation) to replace the 1401, pending arrival of the 360/67. The univ. shut down the datacenter on weekends and I had the place dedicated (although 48hrs w/o sleep made monday classes hard) ... they gave me a bunch of documents and I got to design monitor, device drivers, interrupt handlers, storage management, error recovery, etc. ... within a few weeks I had a 2000 card 360 assembler program. Then within a year of taking the intro class, the 360/67 had come in and I was hired fulltime responsible for os/360 (tss/360 never came to production fruition, so the 360/67 ran as a 360/65 with os/360) ... and I continued to have my weekend dedicated time (later the univ. gets early CP67 from the IBM cambridge center and I could play with it on weekends, rewriting a lot of the code). Then before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing in an independent business to better monetize the investment, including offering services to non-Boeing entities). I thot the Renton datacenter was possibly the largest in the world, a couple hundred million $$$, 360/65s arriving faster than they could be installed; boxes constantly staged in hallways around the machine room. They had one 360/75; when it was used for classified work, they would deploy a black rope around the 75 area, guards on the corners, black velvet lowered over the front panel lights and the printer "windows" (front&back). Boeing badges had a plastic bar, its color indicating employee/management level, and an engraved name with color that indicated security clearance level. Lots of politics between the Renton director and the CFO, 
who only had a 360/30 up at Boeing field for payroll; although they enlarge it to install a 360/67 for me to play with CP67 (when I'm not doing other stuff).

After I graduate, I join the IBM science center (some of the MIT CTSS/7094 people had gone to project MAC on the 5th flr, others went to the science center on the 4th flr). A lot of the CP67 code I had rewritten as an undergraduate was picked up by IBM and shipped. I get asked to give computer/security classes at gov. agencies ... some old ref (gone 404, but lives on at the wayback machine)
https://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml
from Melinda's history
https://www.leeandmelindavarian.com/Melinda#VMHist

disclaimer: I never worked for the gov. and didn't/don't have a clearance ... but they made sure to tell me (offline) that they knew where I was every day of my life back to birth and challenged me to specify dates. I've guessed they justified it because they ran so much of my software ... and it was before the Church Committee (I did get into the habit of always reminding them I don't have a clearance).

trivia: after it was decided to add virtual memory to all 370s, some of the science center people split off and took over the IBM Boston Programming Center on the 3rd flr (to do VM370). It was only part of the 3rd; the bldg directory listed a law firm on the rest of the 3rd. However, the 3rd flr telco closet was on the IBM side, and one panel had "IBM" in large letters while the other panel had a 3letter gov agency. Note that the Boston/Cambridge anti-war activists had stationed people all over the area and phoned in a threat against the 3letter agency to the Boston FBI office ... watching for which bldg (545 tech sq) got evacuated.

Staff, profs, students from Boston area institutions had online use of the Cambridge system ... and we had to demonstrate really strong security ... especially after business planners in Armonk corp hdqtrs loaded the most valuable company information on the system and were using it remotely.

... not classified, but for the secure internet: the last product we did at IBM was HA/CMP. It started out as HA/6000, but I rename it HA/CMP (High Availability Cluster Multi-Processing) after starting to do technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (who had VAXCluster support in the same source base with Unix support ... providing some APIs with VAXCluster semantics made it easier to port to HA/CMP). There was a Jan1992 meeting with the Oracle CEO on cluster scale-up (16way mid92, 128way ye92); old post
https://www.garlic.com/~lynn/95.html#13

Within a few weeks cluster scale-up is transferred (to be announced as IBM supercomputer for technical/scientific *ONLY*) and we are told we can't work on anything with more than four processors. A few months later, we leave IBM.

Later we are brought into small client/server startup as consultants, two of the former Oracle people (in the Ellison meeting) are there responsible for something called "commerce server", the startup had also invented this technology called "SSL" they want to use, it is now frequently called "electronic commerce". I have responsibility for everything between webservers and the financial payment networks. Afterwards I do a talk on "How Internet Isn't Business Critical Dataprocessing" ... based on compensating stuff I had to do for "electronic commerce". I was also doing some stuff with Postel
https://en.wikipedia.org/wiki/Jon_Postel

and he sponsors my talk at ISI/USC. Misc. other stuff
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/
https://www.linkedin.com/pulse/inventing-internet-lynn-wheeler/

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
electronic commerce payment network posts
https://www.garlic.com/~lynn/subnetwork.html#gateway
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

some specific past posts mentioning internet and business critical dataprocessing
https://www.garlic.com/~lynn/2023.html#31 IBM Change
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022f.html#33 IBM "nine-net"
https://www.garlic.com/~lynn/2022c.html#14 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2022b.html#68 ARPANET pioneer Jack Haverty says the internet was never finished
https://www.garlic.com/~lynn/2022b.html#38 Security
https://www.garlic.com/~lynn/2021k.html#128 The Network Nation
https://www.garlic.com/~lynn/2021k.html#87 IBM and Internet Old Farts
https://www.garlic.com/~lynn/2021k.html#57 System Availability
https://www.garlic.com/~lynn/2021j.html#55 ESnet
https://www.garlic.com/~lynn/2021j.html#42 IBM Business School Cases
https://www.garlic.com/~lynn/2021h.html#83 IBM Internal network
https://www.garlic.com/~lynn/2021h.html#72 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2021h.html#24 NOW the web is 30 years old: When Tim Berners-Lee switched on the first World Wide Web server
https://www.garlic.com/~lynn/2021d.html#16 The Rise of the Internet
https://www.garlic.com/~lynn/2019d.html#113 Internet and Business Critical Dataprocessing
https://www.garlic.com/~lynn/2019b.html#100 mainframe hacking "success stories"?
https://www.garlic.com/~lynn/2017f.html#23 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/2017e.html#75 11May1992 (25 years ago) press on cluster scale-up
https://www.garlic.com/~lynn/2017e.html#70 Domain Name System
https://www.garlic.com/~lynn/2015e.html#10 The real story of how the Internet became so vulnerable

--
virtualization experience starting Jan1968, online at home since Mar1970

z/VM 50th - Part 6, long winded zm story (before z/vm)

From: Lynn Wheeler <lynn@garlic.com>
Subject: z/VM 50th - Part 6, long winded zm story (before z/vm).
Date: 26 Jan, 2023
Blog: Facebook
z/VM 50th - Part 6 .. long winded zm story (before z/vm).
https://www.linkedin.com/pulse/zvm-50th-part-6-lynn-wheeler/

In the early 70s, there was the Future System project, totally different from 360/370 and intended to totally replace it (internal politics was shutting down 370 efforts, and the lack of new 370 products during the period is credited with giving the clone 370 makers their market foothold). Then when FS implodes (one of the final nails in the coffin was analysis by the IBM Houston Science Center that if 370/195 applications were redone for an FS machine made out of the fastest available hardware, they would have the throughput of a 370/145 ... about a factor of 30 times slowdown), there is a mad rush to get stuff back into the 370 product pipelines, including the quick&dirty 3033&3081 efforts in parallel. Lots more FS details
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

I had been involved in a couple different 370 multiprocessor efforts, including a 16-processor machine; I got the 3033 processor engineers working on it in their spare time (a lot more interesting than remapping 168 logic to 20% faster chips) and everybody thot it was really good ... that is, until somebody told the head of POK that it could be decades before POK's favorite son operating system (MVS) had effective 16-way support. Then some of us were invited to never visit POK again and the 3033 processor engineers were directed to heads down on 3033 *only*. There was an advanced technology conference where we presented the 16way 370 and the 801 group presented 801/RISC (I claim it was the last adtech conference for some years, since lots of adtech groups were being thrown into the development breach filling the 370 product pipelines). Note: POK doesn't ship a 16-processor system until after the turn of the century. Related 85/165/168/3033/3090 email
https://www.garlic.com/~lynn/2019c.html#email810423

Then the head of POK convinces corporate to kill the VM370 product, shut down the development group, and transfer all the people to POK (claiming that otherwise MVS/XA wouldn't ship on time). They weren't planning to tell the VM370 people until the very last minute, to minimize the number that might escape into the Boston area. The information managed to leak and several escaped. This was about the time of VAX/VMS inception, and the joke was that the head of POK was a major contributor to VMS. There was a witch hunt for the leak; fortunately for me, nobody gave up the leaker. Endicott manages to save the VM370 product mission, but has to reconstitute a development group from scratch (some customer complaints about code quality during the period). I also was con'ed into helping Endicott do the microcode assist for 138/148 (also used in 4331/4341) ... old post with analysis
https://www.garlic.com/~lynn/94.html#21

In the early 80s, I got approval to do presentations on the ECPS implementation at customer user group meetings. After presentations at BAYBUNCH (hosted by Stanford SLAC), the Amdahl people would grill me on some of the details. They said they had developed MACROCODE during the 3033 period (370-like instructions running in microcode mode ... enormously simpler and easier to develop with than high-end machine horizontal microcode). The issue was that IBM was doing a series of trivial 3033 microcode changes required for running MVS ... and MACROCODE made it almost trivial to respond. They were then working on HYPERVISOR, a simplified virtual machine in "hardware" (several years later IBM ships LPAR/PR/SM for 3090; it took much longer since it was done in low-level horizontal microcode).

POK was finding customers weren't converting from MVS to MVS/XA as planned ... and the Amdahl HYPERVISOR made conversion easier by being able to run both MVS and MVS/XA concurrently. That was separate from the issue that an Amdahl single processor was about the same performance as the two-processor 3081K (and a two-processor Amdahl was much better performance than the four-processor 3081K). Now, some of the VM370 people had gone to POK and done the simplified VMTOOL for MVS/XA development (never intended for shipping to customers) ... and it was decided to ship VMTOOL as VM/MA (migration aid) and later VM/SF (system facility) as an aid in MVS to MVS/XA conversion. Part of VMTOOL was also the SIE instruction (both VMTOOL and the 3081 SIE instruction were never targeted for performance, production and/or anything other than development; the 3081 was also short of microcode space, and executing the SIE instruction required the microcode to be "paged in"). Old email from a trout/3090 processor engineer (as soon as the 3033 was out the door, they had started on trout) where they did a SIE instruction designed for performance:
https://www.garlic.com/~lynn/2006j.html#email810630

POK then got interested in re-igniting a production virtual machine product for high-end machines and had a proposal for a couple hundred people (moved to IBM Kingston) to upgrade VMTOOL (VM/MA, VM/SF) to VM370 function and performance. Endicott had an alternative: a system programmer in Rochester had upgraded VM370 with full 370/XA support ... of course POK won.

In the early 80s, I started looking at rewriting VM370 in a higher level language. I did a reimplementation of the VM370 spool file in Pascal/VS running in a virtual address space. The spool file diagnose that did 4k page transfers was synchronous, so difficult for high-speed multi-threaded operation. I had the HSDT project with T1 (1.5mbits/sec) and faster computer links. For HSDT, I needed RSCS to handle 70 or so 4k transfers/sec per T1 link (contiguous allocation for large files, multi-page block reads & writes, etc) ... and the diagnose interface restricted RSCS to 5-10 4k transfers/sec aggregate (depending on the rest of system spool file activity). I held an advanced technology conference spring 1982 ... part of the agenda:
https://www.garlic.com/~lynn/96.html#4a
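As a back-of-the-envelope check on the T1 numbers above (my own arithmetic, not from the original; framing and protocol overhead are ignored):

```python
T1_BPS = 1_544_000        # T1 line rate; "1.5mbits/sec" in round numbers
PAGE_BITS = 4096 * 8      # one 4k spool page, in bits

simplex = T1_BPS / PAGE_BITS   # 4k transfers/sec, one direction (~47)
duplex = 2 * simplex           # T1 links are full duplex (~94)

print(f"{simplex:.1f}/sec simplex, {duplex:.1f}/sec full duplex")
# the raw media rate brackets the ~70/sec per-link target, while the
# stock diagnose interface's 5-10/sec aggregate falls far short of it
```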

Old 1987 email about trying to get permission for talk on HSDT-SFS at VM Workshop
https://www.garlic.com/~lynn/2011e.html#email870204

The IBM Kingston group got interested in programming in a higher level language and a series of meetings was scheduled in IBM Kingston ... to be held in a function room off the cafeteria. The cafeteria had misunderstood and posted signs for the "ZM" meeting (instead of "VM"). IBM Kingston then assumed control of the effort (called "ZM" after the cafeteria mistake) and at one point had a couple hundred people writing specs ... before it was shut down.

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
multiprocessor, smp, tightly-coupled and/or compare-and-swap posts
https://www.garlic.com/~lynn/subtopic.html#smp
hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

a couple other specific posts mentioning HSDT-SFS:
https://www.garlic.com/~lynn/2022.html#85 HSDT SFS (spool file rewrite)
https://www.garlic.com/~lynn/2007c.html#21 How many 36-bit Unix ports in the old days?

Previous posts in series:
https://www.linkedin.com/pulse/zvm-50th-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-2-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-4-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-5-lynn-wheeler/
https://www.linkedin.com/pulse/inventing-internet-lynn-wheeler/
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
https://www.linkedin.com/pulse/mainframe-channel-io-lynn-wheeler/
https://www.linkedin.com/pulse/mainframe-channel-redrive-lynn-wheeler/

recent posts mentioning z/VM 50th
https://www.garlic.com/~lynn/2023.html#51 IBM Bureaucrats, Careerists, MBAs (and Empty Suits)
https://www.garlic.com/~lynn/2022h.html#77 The Internet Is Having Its Midlife Crisis
https://www.garlic.com/~lynn/2022h.html#3 AL Gore Invented The Internet
https://www.garlic.com/~lynn/2022g.html#54 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022g.html#40 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022f.html#113 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022f.html#95 VM I/O
https://www.garlic.com/~lynn/2022f.html#73 IBM/PC
https://www.garlic.com/~lynn/2022f.html#72 IBM/PC
https://www.garlic.com/~lynn/2022f.html#71 COMTEN - IBM Clone Telecommunication Controller
https://www.garlic.com/~lynn/2022f.html#53 z/VM 50th - part 4
https://www.garlic.com/~lynn/2022f.html#50 z/VM 50th - part 3
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022f.html#47 z/VM 50th
https://www.garlic.com/~lynn/2022f.html#44 z/VM 50th

--
virtualization experience starting Jan1968, online at home since Mar1970

Classified Material and Security

From: Lynn Wheeler <lynn@garlic.com>
Subject: Classified Material and Security
Date: 26 Jan, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security

As per my previous post in this thread, I was initially responsible for the "electronic commerce" webserver connections to the financial industry payment networks.

Later we were asked to help word-smith some cal. state legislation. One of the bills they were working on at the time was breach notification. Several of the people were involved in privacy issues and had done several public surveys. The number one issue was that data breaches resulted in fraudulent financial transactions. At the time, little or nothing was being done. Nominally, organizations take security measures in self-protection ... but in this case the institutions weren't at risk, it was the public. It was hoped that publicity from notifications might motivate countermeasures.

Then there were a dozen or so federal (state "preemption") bills introduced, about half similar to the cal. state legislation and half that effectively had breach requirements that would never be met (and therefore preclude notification). Early on, one of these involved the card associations publishing an industry security specification ... institutions certified to the specification wouldn't be required to make notifications. The joke was that every time a certified institution had a breach, its certification was revoked, i.e. the only (active) certifications were for institutions that hadn't yet had a breach. I've periodically commented that some gov. contractors seem to have a similar attitude, that they aren't at risk for a breach, it is the gov (unless it is specifically specified in the contract).

posts mentioning electronic commerce and payment network gateway
https://www.garlic.com/~lynn/subnetwork.html#gateway
posts mentioning assurance
https://www.garlic.com/~lynn/subintegrity.html#assurance
risk, fraud, exploits, threats, vulnerability posts
https://www.garlic.com/~lynn/subintegrity.html#fraud
account number harvesting posts
https://www.garlic.com/~lynn/subintegrity.html#harvest
posts mentioning data breach notification
https://www.garlic.com/~lynn/submisc.html#data.breach.notification

--
virtualization experience starting Jan1968, online at home since Mar1970

Almost IBM class student

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Almost IBM class student
Date: 27 Jan, 2023
Blog: Facebook
Almost IBM class student; Jan1968, three people from the IBM cambridge science center came out to install CP67 (precursor to VM370) ... which I mostly played with during my dedicated weekend window, rewriting a lot of CP67 code (I also continued redoing lots of OS360). Jun1968, IBM CSC scheduled a one-week CP67/CMS class at the Beverly Hills Hilton. I arrive for the class Sunday night and am told that the IBM people who were to teach the class had resigned on Friday (forming one of the 60s online commercial CP67 spinoffs of CSC), and am asked if I would teach the class. Then before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing in an independent business unit to better monetize the investment). However, when I graduate, I join IBM CSC (instead of staying at Boeing).

... HONE trivia: after the 23Jun1969 unbundling announcement, IBM started to charge for software, maint., SE services, etc. Previously, part of SE training was a sort of apprentice program, part of a large group onsite at the customer. After the unbundling announcement, IBM couldn't figure out how not to charge for trainee SEs onsite at the customer (losing the large SE group constantly onsite at the customer disrupted IBM's traditional account control, and also disrupted a critical part of training new generations of SEs). To address part of the SE training, HONE (hands-on network experience) was created, with branch office online access to CP67 (360/67 precursor to VM/370) datacenters for practice running guest operating systems. When 370 was initially announced, the HONE CP67 systems were enhanced with simulation of the new 370 instructions ... allowing branch office SEs to run 370 guest operating systems (in virtual machines).

One of my hobbies after joining IBM was enhanced production systems for internal datacenters, and HONE was a long-time customer. The cambridge science center had also ported APL\360 to CMS for CMS\APL ... fixing its memory management for large virtual memory workspaces (APL\360 traditionally had 16kbyte workspaces) in a demand-page environment and adding APIs for system services (like file i/o, enabling lots of real world applications). HONE then started offering online APL-based sales&marketing tools ... which came to dominate all HONE activity ... and SE training with guest operating systems just dwindled away (with SE skill levels dropping ... SEs increasingly becoming a phone directory for internal IBM technical contacts).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
hone (&/or APL) posts
https://www.garlic.com/~lynn/subtopic.html#hone
23jun1969 unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle

some posts mentioning boeing computer services:
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2023.html#22 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#12 IBM Marketing, Sales, Branch Offices
https://www.garlic.com/~lynn/2023.html#5 1403 printer
https://www.garlic.com/~lynn/2022h.html#99 IBM 360
https://www.garlic.com/~lynn/2022h.html#82 Boeing's last 747 to roll out of Washington state factory
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2022g.html#49 Some BITNET (& other) History
https://www.garlic.com/~lynn/2022g.html#26 Why Things Fail
https://www.garlic.com/~lynn/2022g.html#23 IBM APL
https://www.garlic.com/~lynn/2022g.html#11 360 Powerup
https://www.garlic.com/~lynn/2022f.html#42 IBM Bureaucrats
https://www.garlic.com/~lynn/2022e.html#42 WATFOR and CICS were both addressing some of the same OS/360 problems
https://www.garlic.com/~lynn/2022d.html#110 Window Display Ground Floor Datacenters
https://www.garlic.com/~lynn/2022d.html#106 IBM Quota
https://www.garlic.com/~lynn/2022d.html#100 IBM Stretch (7030) -- Aggressive Uniprocessor Parallelism
https://www.garlic.com/~lynn/2022d.html#95 Operating System File/Dataset I/O
https://www.garlic.com/~lynn/2022d.html#91 Short-term profits and long-term consequences -- did Jack Welch break capitalism?
https://www.garlic.com/~lynn/2022d.html#57 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022d.html#19 COMPUTER HISTORY: REMEMBERING THE IBM SYSTEM/360 MAINFRAME, its Origin and Technology
https://www.garlic.com/~lynn/2022d.html#10 Computer Server Market
https://www.garlic.com/~lynn/2022c.html#72 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022c.html#2 IBM 2250 Graphics Display
https://www.garlic.com/~lynn/2022c.html#0 System Response
https://www.garlic.com/~lynn/2022b.html#117 Downfall: The Case Against Boeing
https://www.garlic.com/~lynn/2022b.html#73 IBM Disks
https://www.garlic.com/~lynn/2022b.html#35 Dataprocessing Career
https://www.garlic.com/~lynn/2022b.html#10 Seattle Dataprocessing
https://www.garlic.com/~lynn/2022.html#120 Series/1 VTAM/NCP
https://www.garlic.com/~lynn/2022.html#73 MVT storage management issues
https://www.garlic.com/~lynn/2022.html#48 Mainframe Career
https://www.garlic.com/~lynn/2022.html#22 IBM IBU (Independent Business Unit)
https://www.garlic.com/~lynn/2022.html#12 Programming Skills

--
virtualization experience starting Jan1968, online at home since Mar1970

Almost IBM class student

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Almost IBM class student
Date: 27 Jan, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#57 Almost IBM class student

When I 1st joined IBM (CSC), IBM was going through fast growth and after a few months I was asked to be a manager. I asked to read the manager's manual over the weekend, came back on Monday, and explained why I wouldn't make a good IBM manager (never was asked again).

My father died when I was in Junior High and after that I always had a job to help my mother. The summer I was 11, I was bucking hay bales w/o gloves; on the 4th of July, lighting firecrackers in my palm to demonstrate calluses. I worked for the local hardware store in high school and periodically would be loaned out to local contractors (concrete, framing, roofing, plumbing, drywall, electrical, whatever was needed) ... saving enough money to start college after I graduated. The summer after my freshman year, I'm foreman on a construction job (started with three nine-person crews) ... and one of the ways of dealing with issues was in the parking lot after work ... which isn't exactly a white collar thing. It had been a really wet spring and the project was way behind schedule ... and we started 80+hr weeks, time&half and double time (I was long at IBM before making more per month than that summer). I usually had to go in 30min before start to make sure everything was planned for the 12+hr (w/breaks&lunch) day and supplies were on hand (some items had several-day lead times; before PERT charts, all items/schedules were memorized and constantly checked ... things needed to be planned out weeks ahead and might have to be adjusted if there were late deliveries). Overall, playing foreman only took 20-30% of my time; the rest of the time I filled in wherever needed.

Back at school, I take a 2-semester-hr intro to fortran/computers and at the end of the semester am hired to re-implement 1401 MPIO on a 360/30. The univ had been sold a 360/67 for tss/360, replacing a 709/1401; a 360/30 (that supported 1401 emulation) temporarily replaced the 1401 until the 360/67 came in. The univ shuts down the datacenter on weekends, which I then have dedicated (although monday classes are hard after 48hrs w/o sleep). I'm given a bunch of manuals and get to design & implement my own monitor, device drivers, interrupt handlers, storage management, error recovery, etc, and within a few weeks I have a 2000-card assembler program. Within a year of taking the intro class, the 360/67 arrives and I'm hired fulltime responsible for os/360 (tss/360 never came to production, so the 360/67 ran as a 360/65 with os/360) ... and I continue to have my dedicated weekend window (1st the 360/30 as my personal computer, and then the 360/67).

Before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing in an independent business unit to better monetize the investment, including offering services to non-Boeing entities). I think the Renton datacenter was possibly the largest in the world (with 360/65s arriving faster than they could be installed). Lots of politics between the Renton director and the CFO, who only has a 360/30 up at Boeing Field for payroll (although they enlarge the machine room to install a 360/67 for me to play with when I'm not doing other stuff). When I graduate, I join IBM (CSC) instead of staying at Boeing.

I didn't know about gov. agencies using my code until after joining IBM, when I was asked to teach computer/security classes in the DC area. IBM then got a new CSO (came from gov. service, had been head of presidential detail) and I was asked to run around with him some and talk about computer security (and a little bit of physical security rubs off).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

some recent MPIO:
https://www.garlic.com/~lynn/2023.html#2 big and little, Can BCD and binary multipliers share circuitry?
https://www.garlic.com/~lynn/2023.html#5 1403 printer
https://www.garlic.com/~lynn/2023.html#22 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#38 Disk optimization
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security

a couple others mentioning Boeing Computer Services:
https://www.garlic.com/~lynn/2023.html#12 IBM Marketing, Sales, Branch Offices
https://www.garlic.com/~lynn/2023.html#57 Almost IBM class student

some recent mentioning running around with new CSO
https://www.garlic.com/~lynn/2022.html#57 Computer Security
https://www.garlic.com/~lynn/2022b.html#38 Security
https://www.garlic.com/~lynn/2022c.html#4 Industrial Espionage
https://www.garlic.com/~lynn/2022e.html#98 Enhanced Production Operating Systems II
https://www.garlic.com/~lynn/2022f.html#14 The January 6 Secret Service Text Scandal Turns Criminal
https://www.garlic.com/~lynn/2022h.html#75 Researchers found security pitfalls in IBM's cloud infrastructure

--
virtualization experience starting Jan1968, online at home since Mar1970

Classified Material and Security

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Classified Material and Security
Date: 28 Jan, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2023.html#56 Classified Material and Security

IBM had "Registered Confidential" ... every page had a large faded red copy number ... each copy was registered to a specific person ... required to be kept under double lock ... and periodically audited by site security.

Circa 1980, IBM brought a trade-secret lawsuit against a clone disk maker for a couple billion dollars ... for having acquired detailed documents on the unannounced new (3380) disk drive. The judge ruled that IBM had to show security proportional to risk ... or "security proportional to value" ... i.e. a normal person, finding something not adequately protected and tempted to sell it for money, couldn't (solely) be blamed (analogous to requiring fences around swimming pools because children can't be expected not to jump into an unprotected pool). Hence: fences around plant sites, guards at gates checking people, access control for buildings, access control for high security rooms inside buildings, frequent employee education about protecting corporate information, etc.

In that time frame, I had a full set of all the ("811" for their nov1978 publication date) "Registered Confidential" documents for the unannounced/unshipped 370/xa architecture (which would eventually show up w/3081).

For some reason, I got a call from a head hunter asking me to interview for the job of technical assistant to the president of a clone 370 processor company (that resold machines manufactured on the other side of the pacific). During the interview there were hints dropped that they were interested in the new 370/xa architecture. I politely mentioned that I had recently submitted changes to the IBM Employee ethics booklet because I didn't think that it had strong enough ethics guidelines. The interview ended shortly thereafter ... and I never heard from them again.

security proportional to risk posts
https://www.garlic.com/~lynn/submisc.html#security.proportional.to.risk

Later, the overseas company was involved in a federal court case about industrial espionage and, because I was listed on the US company's lobby checkin, I had a 3hr interview with an FBI agent. Afterwards, I wondered who had leaked the information (that I had a full copy of the XA documents) ... that likely resulted in the job interview (and the later visit by the FBI; possibly somebody from site security?).

unrelated: did periodically play disk engineer ... some recent refs:
https://www.linkedin.com/pulse/mainframe-channel-io-lynn-wheeler/
https://www.linkedin.com/pulse/mainframe-channel-redrive-lynn-wheeler/

archived posts mentioning playing disk engineer
https://www.garlic.com/~lynn/subtopic.html#disk

posts mentioning "811" documents
https://www.garlic.com/~lynn/2022h.html#69 Fred P. Brooks, 1931-2022
https://www.garlic.com/~lynn/2012c.html#29 5 Byte Device Addresses?
https://www.garlic.com/~lynn/2012b.html#66 M68k add to memory is not a mistake any more
https://www.garlic.com/~lynn/2011n.html#31 big-little
https://www.garlic.com/~lynn/2011g.html#12 Clone Processors
https://www.garlic.com/~lynn/2011g.html#8 Is the magic and romance killed by Windows (and Linux)?
https://www.garlic.com/~lynn/2011g.html#2 WHAT WAS THE PROJECT YOU WERE INVOLVED/PARTICIPATED AT IBM THAT YOU WILL ALWAYS REMEMBER?
https://www.garlic.com/~lynn/2011f.html#50 Dyadic vs AP: Was "CPU utilization/forecasting"
https://www.garlic.com/~lynn/2011f.html#46 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011f.html#39 At least two decades back, some gurus predicted that mainframes would disappear in future and it still has not happened
https://www.garlic.com/~lynn/2011f.html#20 New job for mainframes: Cloud platform
https://www.garlic.com/~lynn/2011c.html#67 IBM Future System
https://www.garlic.com/~lynn/2006u.html#61 Why these original FORTRAN quirks?

--
virtualization experience starting Jan1968, online at home since Mar1970

Boyd & IBM "Wild Duck" Discussion

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Boyd & IBM "Wild Duck" Discussion
Date: 29 Jan, 2023
Blog: Facebook
Boyd & IBM "Wild Duck" Discussion
https://www.linkedin.com/pulse/boyd-ibm-wild-duck-discussion-lynn-wheeler/
2014 Linkedin Wild Duck archived posts
https://www.garlic.com/~lynn/2014b.html#93 Maximizing shareholder value: The Goal that changed corporate America
https://www.garlic.com/~lynn/2014b.html#97 Where does the term Wild Duck come from?
https://www.garlic.com/~lynn/2014b.html#98 How to groom a leader?
https://www.garlic.com/~lynn/2014b.html#105 Happy 50th Birthday to the IBM Cambridge Scientific Center
https://www.garlic.com/~lynn/2014c.html#52 First 2014 Golden Goose Award to physicist Larry Smarr
https://www.garlic.com/~lynn/2014c.html#53 Not Wild Ducks but Wild Geese - The history behind the story
https://www.garlic.com/~lynn/2014d.html#8 Microsoft culture must change, chairman says
https://www.garlic.com/~lynn/2014h.html#33 Can Ginni really lead the company to the next great product line?
https://www.garlic.com/~lynn/2014h.html#79 EBFAS
https://www.garlic.com/~lynn/2014i.html#7 You can make your workplace 'happy'
https://www.garlic.com/~lynn/2014l.html#56 This Chart From IBM Explains Why Cloud Computing Is Such A Game-Changer

more recent John Boyd and IBM Wild Ducks
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
a few archived posts
https://www.garlic.com/~lynn/2022h.html#55 More John Boyd and OODA-loop
https://www.garlic.com/~lynn/2022h.html#25 Sun Tzu, Aristotle, and John Boyd
https://www.garlic.com/~lynn/2022h.html#19 Sun Tzu, Aristotle, and John Boyd
https://www.garlic.com/~lynn/2022h.html#18 Sun Tzu, Aristotle, and John Boyd
https://www.garlic.com/~lynn/2022g.html#24 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022f.html#67 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022f.html#60 John Boyd and IBM Wild Ducks

showed up today in facebook 29jan2014 memory:
From (linkedin) IBM "Wild Duck" discussion:

Command Culture: Officer Education in the U.S. Army and the German Armed Forces:
https://www.amazon.com/Command-Culture-Education-1901-1940-Consequences-ebook/dp/B009K7VYLI/
German junior officers were regularly asked for their opinions and they would criticize the outcome of a large maneuver with several divisions before the attending general had the floor. The American army culture in contrast has historically had a great problem with dissenters and mavericks and just speaking one's mind to a superior officer, disagreeing with or criticizing him could easily break a career.

from "Computer Wars: The Post-IBM World" Ferguson & Morris on failure of Future System:

... and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with sycophancy and make no waves under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat

... and:

But because of the heavy investment of face by the top management, F/S took years to kill, although its wrongheadedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive.

... snip ...

Command Culture: Officer Education in the U.S. Army and the German Armed Forces:

As a young officer, Dwight D. Eisenhower wrote an article favoring mechanization of the cavalry.87 The article displeased the chief of infantry greatly and Ike was commanded not only to cease such heretical activities but also to publicly reverse his opinion. He was threatened with a court-martial.88 His superiors expected a fellow officer to become a sycophant.

... snip ...

One of Boyd's briefings at IBM was "Organic Design for Command and Control" ... which ends with what is really needed being "Appreciation and Leadership". Part of the briefings was the observation that US corporate culture was becoming contaminated by former military officers climbing the corporate ladder.


... snip ...

and then 29Jan2019:
More recent

Fingertip Feeling: A Synthesis of Ideas from Maneuver Warfare, 4GW, the OODA-loop, and Ender's Game
https://medium.com/@jamieschwandt/fingertip-feeling-a-synthesis-of-ideas-from-maneuver-warfare-4gw-the-ooda-loop-and-enders-game-4d6c15685f8e

Boyd has coup d'oeil and Fingerspitzengefühl with intuition and instinct ... plausibly from Course Of Instruction In Strategy, Fortification, Tactics Of Battles, & C.; Embracing The Duties Of Staff, Infantry (Henry Wager Halleck)
http://www.amazon.com/Elements-Instruction-Fortification-Embracing-ebook/dp/B002RKSO9K
loc5019-20:

A rapid coup d'oeil prompt decision, active movements, are as indispensable as sound judgment; for the general must see, and decide, and act, all in the same instant.

... snip ...

Could claim that Boyd takes see/decide/act ... replaces see with observe ... and then adds internal brain processing ... putting observation in context (orientate, learning, knowledge).

There is an AI book from the 90s where one of the "fathers" of AI (dating back to the 60s) says in the intro that all AI up until then had been done wrong because it wasn't placing information in "context".

In briefings, Boyd would also emphasize observing from every possible facet ... a countermeasure to biases (observation, orientation, confirmation, cognitive, etc). Also, for many people OODA-loop implies step-by-step sequential operation, rather than all parts running simultaneously and asynchronously.

Late 70s, there was a multiuser space war game deployed on the internal network and within a few weeks there were "bot" players beating humans. The game was upgraded with a nonlinear increase in power use for move intervals below the human threshold ... putting things on a (partially) level playing field.

This is also pretty much how Boyd would describe fly-by-wire for F16 ... human reflexes aren't fast enough.
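The game's handicap can be sketched as a rate-limiting rule: moves issued faster than a human reaction threshold cost nonlinearly more power, so a bot gains little by moving at machine speed. This is only an illustrative sketch; the threshold value and the quadratic scaling are assumptions, not the actual game's formula.

```python
# Hypothetical sketch of the game's nonlinear power handicap.
# HUMAN_THRESHOLD and the quadratic scaling are assumed values.

HUMAN_THRESHOLD = 0.20  # seconds; assumed human reaction time

def move_cost(interval, base_cost=1.0):
    """Power cost of a move issued `interval` seconds after the previous one."""
    if interval >= HUMAN_THRESHOLD:
        return base_cost                  # human-speed moves pay a flat rate
    speedup = HUMAN_THRESHOLD / interval  # how far below the human threshold
    return base_cost * speedup ** 2       # cost grows with the speedup squared

# a bot moving 10x faster than the threshold pays ~100x per move,
# so its power drain per unit time vastly exceeds a human player's
print(move_cost(0.5), move_cost(0.02))
```

With this shape of cost curve, a bot can still out-react a human, but only by burning power much faster than it can possibly be worth, which is the "(partially) level playing field" effect described above.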

AI then was more "expert systems", capturing sets of rules (from some domain expert) ... akin to Muth's "school solutions" in "Command Culture"
https://www.amazon.com/gp/aw/d/B009K7VYLI/
(contrasted to exercises formulating solutions; with compare&contrast).

... not having words/concepts (including intuition/instinct)

How Toyota Turns Workers Into Problem Solvers
http://hbswk.hbs.edu/item/how-toyota-turns-workers-into-problem-solvers

To paraphrase one of our contacts, he said, "It's not that we don't want to tell you what TPS is, it's that we can't. We don't have adequate words for it. But, we can show you what TPS is."

We've observed that Toyota, its best suppliers, and other companies that have learned well from Toyota can confidently distribute a tremendous amount of responsibility to the people who actually do the work, from the most senior, experienced member of the organization to the most junior. This is accomplished because of the tremendous emphasis on teaching everyone how to be a skillful problem solver.

... snip ...

and "school solutions" versus innovation

The Starfish and the Spider: The Unstoppable Power of Leaderless Organizations
https://www.amazon.com/Starfish-Spider-Unstoppable-Leaderless-Organizations-ebook/dp/B000S1LU3M/
pg186/loc2059-62:

Experts tried to explain why Toyota plants were able to produce a high-quality product and foster efficient teamwork while GM's were not. Some speculated that GM's problems arose from the growing power of unions. Others, including Drucker, attributed the Japanese success to cultural differences. The Japanese, he said, had "come to accept my position that the end of business is not 'to make money."

pg191/loc2120-25:

Toyota occupied the decentralized sweet spot in the automotive industry. Had it centralized its assembly line to mirror GM's, it would have taken power away from employees and reduced vehicle quality. But on the other hand, had Toyota decentralized too far--doing away with structure and controls and, say, letting each circle work on whatever car it felt like--the company would have had a mess on its hands. Decentralization brings out creativity, but it also creates variance. One Toyota circle might very well make a wonderful automobile, while another might produce a junker. The sweet spot that Toyota found has enough decentralization for creativity, but sufficient structure and controls to ensure consistency.

... snip ...

Some four decades ago, congress passed import quotas on foreign autos, reducing competition and enormously increasing profits, with the expectation that the industry would use the money to remake itself ... however they just pocketed the money and continued business as usual. As a result, in the early 80s, there were calls for a 100% unearned profit tax on the US auto industry.

Then in 1990, the industry formed the C4 taskforce to (finally?) look at completely remaking themselves, and because they were planning on heavily leveraging technology, they invited major technology vendors to send representatives (offline, I would chide some of the computer "big iron" people about how they could contribute since they suffered from similar problems). One of the issues was that the US industry was taking 7-8yrs from start to rolling off the line, with two efforts offset 3-4 years (so it looks like something is coming more frequently, with cosmetic changes in between). Toyota had cut that time in half in the 80s and in 1990 was in the process of cutting it in half again (18-24months) ... allowing it to adapt more quickly to new technologies and changing consumer preferences.

Judging from the more recent auto bailouts, they weren't successful in the make-over ... too many stakeholders and vested interests trying to preserve the status quo.

Amazon having trouble with balance of centralized and decentralized (ala toyota) https://www.forbes.com/sites/hbsworkingknowledge/2018/05/21/httpshbswk-hbs-eduitemamazon-vs-whole-foods-when-cultures-collide/ <gone 404, but lives on at wayback machine>
https://web.archive.org/web/20180731091440/https://www.forbes.com/sites/hbsworkingknowledge/2018/05/21/httpshbswk-hbs-eduitemamazon-vs-whole-foods-when-cultures-collide/


... snip ...

archived Boyd posts
https://www.garlic.com/~lynn/subboyd.html
1990 auto C4 taskforce
https://www.garlic.com/~lynn/submisc.html#auto.c4.taskforce

--
virtualization experience starting Jan1968, online at home since Mar1970

Software Process

From: Lynn Wheeler <lynn@garlic.com>
Subject: Software Process
Date: 30 Jan, 2023
Blog: Facebook
from (silicon valley) "real programmers"
Real Programmers never work 9 to 5. If any real programmers are around at 9am, it's because they were up all night.

... snip ...

... for the 10% that do 90% of the work, they have to concentrate on complex issues and interruptions destroy that concentration (they also are proficient in programming languages, analogous to natural language proficiency, thinking & dreaming in the language w/o needing translation) ... the same applies to cubicles & "open offices"

Google got it wrong. The open-office trend is destroying the workplace.
https://www.washingtonpost.com/posteverything/wp/2014/12/30/google-got-it-wrong-the-open-office-trend-is-destroying-the-workplace/

real programmers and/or "open office" posts
https://www.garlic.com/~lynn/2022h.html#90 Psychology of Computer Programming
https://www.garlic.com/~lynn/2022d.html#28 Remote Work
https://www.garlic.com/~lynn/2021d.html#49 Real Programmers and interruptions
https://www.garlic.com/~lynn/2019.html#6 Fwd: It's Official: Open-Plan Offices Are Now the Dumbest Management Fad of All Time | Inc.com
https://www.garlic.com/~lynn/2018b.html#56 Computer science hot major in college (article)
https://www.garlic.com/~lynn/2017i.html#53 When Working From Home Doesn't Work
https://www.garlic.com/~lynn/2017h.html#27 OFF TOPIC: University of California, Irvine, revokes 500 admissions
https://www.garlic.com/~lynn/2016f.html#19 And it's gone --The true cost of interruptions
https://www.garlic.com/~lynn/2016d.html#72 Five Outdated Leadership Ideas That Need To Die
https://www.garlic.com/~lynn/2015b.html#15 What were the complaints of binary code programmers that not accept Assembly?
https://www.garlic.com/~lynn/2014m.html#167 Is true that a real programmer would not stoop to wasting machine capacity to do the assembly?
https://www.garlic.com/~lynn/2014m.html#156 Is true that a real programmer would not stoop to wasting machine capacity to do the assembly?
https://www.garlic.com/~lynn/2014m.html#152 Is true that a real programmer would not stoop to wasting machine capacity to do the assembly?
https://www.garlic.com/~lynn/2014m.html#139 Is true that a real programmer would not stoop to wasting machine capacity to do the assembly?
https://www.garlic.com/~lynn/2014f.html#89 Real Programmers
https://www.garlic.com/~lynn/2014.html#24 Scary Sysprogs and educating those 'kids'
https://www.garlic.com/~lynn/2014.html#23 Scary Sysprogs and educating those 'kids'
https://www.garlic.com/~lynn/2013m.html#16 Work long hours (Was Re: Pissing contest(s))
https://www.garlic.com/~lynn/2011l.html#5 computer bootlaces
https://www.garlic.com/~lynn/2011e.html#22 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011e.html#20 Multiple Virtual Memory
https://www.garlic.com/~lynn/2010c.html#88 search engine history, was Happy DEC-10 Day
https://www.garlic.com/~lynn/2008h.html#83 Java; a POX
https://www.garlic.com/~lynn/2007j.html#7 Newbie question on table design
https://www.garlic.com/~lynn/2007f.html#35 "MVS Experience"
https://www.garlic.com/~lynn/2006b.html#11 IBM 610 workstation computer
https://www.garlic.com/~lynn/2005j.html#24 Public disclosure of discovered vulnerabilities
https://www.garlic.com/~lynn/2004q.html#43 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004p.html#24 Systems software versus applications software definitions
https://www.garlic.com/~lynn/2004b.html#35 A POX on you, Dennis Ritchie!!!
https://www.garlic.com/~lynn/2003j.html#43 An a.f.c bibliography?
https://www.garlic.com/~lynn/2003b.html#58 When/why did "programming" become "software development?"
https://www.garlic.com/~lynn/2002o.html#72 So I tried this //vm.marist.edu stuff on a slow Sat. night,
https://www.garlic.com/~lynn/2002o.html#69 So I tried this //vm.marist.edu stuff on a slow Sat. night,
https://www.garlic.com/~lynn/2002e.html#39 Why Use *-* ?
https://www.garlic.com/~lynn/2001e.html#31 High Level Language Systems was Re: computer books/authors (Re: FA:

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM (FE) Retain

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM (FE) Retain
Date: 30 Jan, 2023
Blog: Facebook
The author of VMSG (email client, also used by PROFS) also implemented parasite/story ... a CMS app that leveraged VM370 PVM pseudo 3270 support (there was also a PVM->CCDN gateway) and a HLLAPI-like programming language. Old archive post with a "story" that implements an automated PUT bucket retriever (aka automated log into RETAIN and download)
https://www.garlic.com/~lynn/2001k.html#36
other automated "story"
https://www.garlic.com/~lynn/2001k.html#35

--
virtualization experience starting Jan1968, online at home since Mar1970

Boeing to deliver last 747, the plane that democratized flying

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Boeing to deliver last 747, the plane that democratized flying
Date: 31 Jan, 2023
Blog: Facebook
Boeing to deliver last 747, the plane that democratized flying
https://techxplore.com/news/2023-01-boeing-plane-democratized-flying.html

At the end of a semester taking a two credit hr intro to fortran/computers, I was hired to rewrite 1401 MPIO in 360/30 assembler. Univ had been sold a 360/67 (for tss/360) to replace 709/1401. A 360/30 (which had 1401 emulation) was brought in temporarily to replace the 1401, pending arrival of the 360/67. Then within a year of taking the intro class, the 360/67 had come in and I was hired fulltime responsible for os/360 (tss/360 never came to production fruition, so the 360/67 ran as a 360/65 with os/360).

Then before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business to better monetize the investment, including offering services to non-Boeing entities). I thot Renton datacenter was possibly the largest in the world, couple hundred million $$$, 360/65s arriving faster than they could be installed; boxes constantly staged in hallways around the machine room. Lots of politics between the Renton director and the CFO, who only had a 360/30 for payroll up at Boeing field (although they enlarge the machine room for a 360/67 for me to play with, when I'm not doing other stuff).

747 #3 was flying the skies of Seattle getting FAA flt certification. There was a mockup of the 747 cabin just south of Boeing field. Tours would claim that the 747 would always be served by at least four jetways for passenger load/unload (because of so many people).

The 747 was the 1st heavy and had computer auto landing ... the end of the seatac runway started to crack because 747s were always landing within a couple ft of the same spot every time. Claim is they then started slightly varying the approach glide slope signal.

IBM Downfall posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

some posts mentioning Boeing, "M/D" take-over, Renton datacenter
https://www.garlic.com/~lynn/2022d.html#91 Short-term profits and long-term consequences -- did Jack Welch break capitalism?
https://www.garlic.com/~lynn/2022b.html#117 Downfall: The Case Against Boeing
https://www.garlic.com/~lynn/2022.html#109 Not counting dividends IBM delivered an annualized yearly loss of 2.27%
https://www.garlic.com/~lynn/2021f.html#78 The Long-Forgotten Flight That Sent Boeing Off Course
https://www.garlic.com/~lynn/2021f.html#57 "Hollywood model" for dealing with engineers
https://www.garlic.com/~lynn/2021b.html#40 IBM & Boeing run by Financiers
https://www.garlic.com/~lynn/2020.html#10 "This Plane Was Designed By Clowns, Who Are Supervised By Monkeys"
https://www.garlic.com/~lynn/2019e.html#153 At Boeing, C.E.O.'s Stumbles Deepen a Crisis
https://www.garlic.com/~lynn/2019e.html#151 OT: Boeing to temporarily halt manufacturing of 737 MAX
https://www.garlic.com/~lynn/2019d.html#2 Rise and Fall of IBM

--
virtualization experience starting Jan1968, online at home since Mar1970

Boeing to deliver last 747, the plane that democratized flying

From: Lynn Wheeler <lynn@garlic.com>
Subject: Boeing to deliver last 747, the plane that democratized flying
Date: 31 Jan, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#63 Boeing to deliver last 747, the plane that democratized flying

SJC was also 1st place I saw multiple flt numbers & "change of equipment" ... twa parked some planes overnight in san jose (because it was cheaper than SFO). Early morning the same flt took off for stop in SFO and then on to both Seattle and Kennedy (aka one of the flt numbers had "change of equipment" in SFO). Eventual explanation was paper OAG listed direct flts before connecting flts ... so the SJC flt appeared as "direct" to both Seattle and Kennedy (even tho there was a "change of equipment" for one of them in SFO ... aka a connecting flt by any other name ... gaming the paper OAG).
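The paper OAG's listing rule can be sketched as a simple sort: "direct" flights (a single flight number, any number of stops) sorted ahead of connections (a change of flight number), so keeping one flight number through a change of equipment promoted the listing. A minimal sketch (not OAG's actual code; the flight numbers and routing are hypothetical):

```python
# Hypothetical flight entries: (flight numbers used, routing).
# A "change of equipment" under a single flight number still counts
# as "direct" for listing purposes -- the gaming described above.
flights = [
    (["TW98", "TW411"], "SJC-SFO-JFK"),  # two numbers: a connection
    (["TW12"], "SJC-SFO-JFK"),           # one number: listed as "direct",
                                         #   despite equipment change in SFO
]

def oag_sort_key(entry):
    numbers, route = entry
    is_connection = len(numbers) > 1     # direct (False) sorts first
    return (is_connection, route)

listing = sorted(flights, key=oag_sort_key)
assert listing[0][0] == ["TW12"]         # the "direct" flt leads the listing
```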

Many years later (after leaving IBM) ... I was asked into the largest airline res system to look at the ten impossible things they couldn't do. I was given machine readable of every commercial airline flt segment in the world ... and a couple of months later came back with a rewrite that did all ten impossible things. I found one flt (I think it was Honolulu to LAX) that appeared to be listed under 13 different flt numbers (same airline, same take-off, same landing). Then I was told reality: they hadn't really wanted me to do it (they just wanted to tell the parent board that I was working on it) ... in order to do the ten impossible things ... I had to automate some processes that were being performed by several hundred people.

some past refs:
https://www.garlic.com/~lynn/2022h.html#58 Model Mainframe
https://www.garlic.com/~lynn/2021f.html#8 Air Traffic System
https://www.garlic.com/~lynn/2021.html#71 Airline Reservation System
https://www.garlic.com/~lynn/2016g.html#44 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016g.html#38 LIFE magazine 1945 "Thinking machines" predictions
https://www.garlic.com/~lynn/2016f.html#109 Airlines Reservation Systems
https://www.garlic.com/~lynn/2016e.html#93 Delta Outage
https://www.garlic.com/~lynn/2016.html#58 Man Versus System
https://www.garlic.com/~lynn/2015d.html#84 ACP/TPF
https://www.garlic.com/~lynn/2014i.html#53 transactions, was There Is Still Hope
https://www.garlic.com/~lynn/2014g.html#101 Costs of core
https://www.garlic.com/~lynn/2014g.html#54 Has the last fighter pilot been born?
https://www.garlic.com/~lynn/2014e.html#10 Can the mainframe remain relevant in the cloud and mobile era?
https://www.garlic.com/~lynn/2013n.html#0 'Free Unix!': The world-changing proclamation made 30yearsagotoday
https://www.garlic.com/~lynn/2013.html#7 From build to buy: American Airlines changes modernization course midflight
https://www.garlic.com/~lynn/2013.html#1 IBM Is Changing The Terms Of Its Retirement Plan, Which Is Frustrating Some Employees
https://www.garlic.com/~lynn/2012n.html#59 history of Programming language and CPU in relation to each
https://www.garlic.com/~lynn/2012e.html#70 Disruptive Thinkers: Defining the Problem
https://www.garlic.com/~lynn/2011n.html#92 Innovation and iconoclasm
https://www.garlic.com/~lynn/2010n.html#81 Hashing for DISTINCT or GROUP BY in SQL
https://www.garlic.com/~lynn/2009l.html#55 IBM halves mainframe Linux engine prices
https://www.garlic.com/~lynn/2008j.html#32 CLIs and GUIs
https://www.garlic.com/~lynn/2007n.html#8 nouns and adjectives
https://www.garlic.com/~lynn/2007g.html#22 Bidirectional Binary Self-Joins
https://www.garlic.com/~lynn/2006o.html#18 RAMAC 305(?)
https://www.garlic.com/~lynn/2006j.html#6 The Pankian Metaphor
https://www.garlic.com/~lynn/2004q.html#85 The TransRelational Model: Performance Concerns
https://www.garlic.com/~lynn/2004b.html#6 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2002j.html#83 Summary: Robots of Doom
https://www.garlic.com/~lynn/2000f.html#20 Competitors to SABRE?

--
virtualization experience starting Jan1968, online at home since Mar1970

7090/7044 Direct Couple

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: 7090/7044 Direct Couple
Date: 01 Feb, 2023
Blog: Facebook
7090/7044 Direct Couple
https://bitsavers.org/pdf/ibm/7040/C28-6382-3_IBM_7090-7040_Direct_Couple_Operating_System_Programmers_Guide_Jul66.pdf
http://bitsavers.trailing-edge.com/pdf/ibm/7040/C28-6383-2_IBM_7090-7040_Direct_Couple_Operating_System_Systems_Programmers_Guide_Dec65.pdf

Decade ago, I was asked to track down the decision to make all 370s "virtual memory". Eventually found the staff member to the executive that made the decision (basically MVT storage management was so bad that regions had to be specified four times larger than used; a typical 1mbyte 370/165 then only had enough room to run four regions concurrently, insufficient to keep the machine busy and justified; going to 16mbyte virtual memory allowed the number of concurrent regions to be increased by a factor of four with little or no paging ... VS2/SVS was similar to running MVT in a 16mbyte CP67 virtual machine). old archived post with pieces of email exchange
https://www.garlic.com/~lynn/2011d.html#73
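The region arithmetic above can be put as a back-of-envelope sketch (the 4x over-specification factor and 1mbyte/16mbyte figures are the post's own; the 256KB region size is an illustrative assumption):

```python
# MVT: regions specified ~4x larger than actually touched, and every
# specified byte occupies real storage.
real_storage_kb = 1024                 # typical 370/165 real storage
region_spec_kb  = 256                  # region size as specified (assumed)
overspec_factor = 4                    # specified ~4x larger than used
region_used_kb  = region_spec_kb // overspec_factor   # ~64KB really touched

mvt_regions = real_storage_kb // region_spec_kb       # only 4 fit

# VS2/SVS: regions laid out in a 16mbyte virtual address space; only the
# pages actually touched need real storage, so concurrent regions go up
# ~4x with little or no paging.
svs_regions = real_storage_kb // region_used_kb       # now 16 fit

assert svs_regions == mvt_regions * overspec_factor
```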

also some mention that VS2/MVS originally was supposed to be "glide path" to "Future System" ... more details:
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

some discussion of HASP ... and that 7090/7044 DCS was a precursor to ASP (disclaimer: my wife was in the gburg JES group and one of the catchers for JES3 ... also coauthor of "JESUS", the JES Unified System specification, all the features of JES2 & JES3 that the respective customers couldn't live w/o ... never came to fruition for various reasons).

I had taken a two credit hr intro fortran/computers class and at the end of the semester was hired to rewrite 1401 MPIO for the 360/30 in assembler. Univ had been sold a 360/67 for tss/360 to replace 709/1401 ... and the 1401 was temporarily replaced with a 360/30 (pending availability of the 360/67). I got to design and implement my own monitor, device drivers, interrupt handlers, storage management, error recovery, etc. ... within a few weeks I had a 2000 card assembler program (univ. shut down the datacenter on weekends and I would have the whole place dedicated, although 48hrs w/o sleep made monday classes hard).

Within a year of taking the intro class (and after the 360/67 came in), I was hired fulltime responsible for os/360 (tss/360 never came to production, so the 360/67 ran as a 360/65 with os/360; I continued to have my weekend dedicated time). 709 ibsys tape->tape ran student fortran jobs in under a second. Initially 360/65 os/360 ran them in over a minute. I installed HASP and it cut the time in half. I then redid the stage2 sysgen for careful placement of datasets and PDS members (optimizing arm seek and PDS directory multi-track search), cutting it by another 2/3rds to 12.9secs/job. OS/360 student fortran never got better than the 709 until I installed WATFOR.
https://www.linkedin.com/pulse/zvm-50th-part-6-lynn-wheeler/
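The PDS member placement part of that optimization can be sketched as expected-search-cost arithmetic: the PDS directory was searched sequentially (multi-track search), so lookup cost is roughly proportional to a member's position, and placing the most frequently referenced members first cuts the average. A minimal sketch with invented member names and use counts:

```python
def avg_search_cost(order, freq):
    # expected number of directory entries scanned per lookup,
    # weighted by how often each member is referenced
    total = sum(freq.values())
    return sum((order.index(m) + 1) * f for m, f in freq.items()) / total

# hypothetical members and reference counts
freq = {"ZLINK": 50, "YCOPY": 30, "MASM": 15, "AONE": 3, "BTWO": 2}

alpha_order = sorted(freq)                             # default: alphabetic
hot_first   = sorted(freq, key=freq.get, reverse=True) # hottest members first

# careful placement beats the default ordering
assert avg_search_cost(hot_first, freq) < avg_search_cost(alpha_order, freq)
```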

HASP, JES, NJE/NJI, etc posts
https://www.garlic.com/~lynn/submain.html#hasp
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

Boeing to deliver last 747, the plane that democratized flying

From: Lynn Wheeler <lynn@garlic.com>
Subject: Boeing to deliver last 747, the plane that democratized flying
Date: 02 Feb, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2022h.html#82 Boeing's last 747 to roll out of Washington state factory
https://www.garlic.com/~lynn/2023.html#63 Boeing to deliver last 747, the plane that democratized flying
https://www.garlic.com/~lynn/2023.html#64 Boeing to deliver last 747, the plane that democratized flying

trivia: early 747s experienced some sabotage ... they had built the new 747 plant up in Everett ... north of Seattle ... and lots of workers from Renton (south of Seattle), who lived in the Renton area and/or further south, were required to commute to the new Paine Field plant.

datacenter trivia: There was a disaster plan to duplicate the Renton datacenter (which I thot was possibly the largest in the world) up at the new 747 plant at Paine field in Everett (north of Seattle) ... the scenario was that Mt. Rainier might heat up and the resulting mud slide would take out the Renton datacenter (the justification was that the cost to Boeing of being w/o the Renton datacenter for a week or two would be more than the cost of duplicating the datacenter).

availability posts
https://www.garlic.com/~lynn/submain.html#available

some past posts mentioning mt. rainier mud slide
https://www.garlic.com/~lynn/2022h.html#4 IBM CAD
https://www.garlic.com/~lynn/2022g.html#63 IBM DPD
https://www.garlic.com/~lynn/2021k.html#55 System Availability
https://www.garlic.com/~lynn/2021e.html#54 Learning PDP-11 in 2021
https://www.garlic.com/~lynn/2021d.html#34 April 7, 1964: IBM Bets Big on System/360
https://www.garlic.com/~lynn/2021.html#78 Interactive Computing
https://www.garlic.com/~lynn/2020.html#45 Watch AI-controlled virtual fighters take on an Air Force pilot on August 18th
https://www.garlic.com/~lynn/2020.html#10 "This Plane Was Designed By Clowns, Who Are Supervised By Monkeys"
https://www.garlic.com/~lynn/2019e.html#153 At Boeing, C.E.O.'s Stumbles Deepen a Crisis
https://www.garlic.com/~lynn/2019d.html#60 IBM 360/67
https://www.garlic.com/~lynn/2019b.html#51 System/360 consoles
https://www.garlic.com/~lynn/2019b.html#38 Reminder over in linkedin, IBM Mainframe announce 7April1964
https://www.garlic.com/~lynn/2018e.html#29 These Are the Best Companies to Work For in the U.S
https://www.garlic.com/~lynn/2018.html#55 Now Hear This--Prepare For The "To Be Or To Do" Moment
https://www.garlic.com/~lynn/2018.html#28 1963 Timesharing: A Solution to Computer Bottlenecks
https://www.garlic.com/~lynn/2017k.html#58 Failures and Resiliency
https://www.garlic.com/~lynn/2017j.html#104 Now Hear This-Prepare For The "To Be Or To Do" Moment
https://www.garlic.com/~lynn/2017g.html#60 Mannix "computer in a briefcase"
https://www.garlic.com/~lynn/2017.html#46 Hidden Figures and the IBM 7090 computer
https://www.garlic.com/~lynn/2016c.html#17 Globalization Worker Negotiation
https://www.garlic.com/~lynn/2016c.html#10 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2015h.html#100 OT: Electrician cuts wrong wire and downs 25,000 square foot data centre
https://www.garlic.com/~lynn/2014e.html#23 Is there any MF shop using AWS service?
https://www.garlic.com/~lynn/2014e.html#19 The IBM Strategy
https://www.garlic.com/~lynn/2014e.html#9 Boyd for Business & Innovation Conference
https://www.garlic.com/~lynn/2013o.html#18 Why IBM chose MS-DOS, was Re: 'Free Unix!' made30yearsagotoday
https://www.garlic.com/~lynn/2013f.html#74 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013.html#7 From build to buy: American Airlines changes modernization course midflight
https://www.garlic.com/~lynn/2012.html#42 Drones now account for one third of U.S. warplanes
https://www.garlic.com/~lynn/2011l.html#37 movie "Airport" on cable
https://www.garlic.com/~lynn/2011h.html#61 Do you remember back to June 23, 1969 when IBM unbundled
https://www.garlic.com/~lynn/2010q.html#59 Boeing Plant 2 ... End of an Era
https://www.garlic.com/~lynn/2010l.html#51 Mainframe Hacking -- Fact or Fiction
https://www.garlic.com/~lynn/2010k.html#18 taking down the machine - z9 series
https://www.garlic.com/~lynn/2008s.html#74 Is SUN going to become x86'ed ??

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM "Green Card"

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM "Green Card"
Date: 02 Feb, 2023
Blog: Facebook
... when I joined the science center ... I got a "blue card" from one of the inventors of GML (i.e. the "GML" name was chosen from the 1st letters of the three inventors' last names). Also, somebody did a rendition of the green card in IOS3270 ... I've done a Q&D conversion to HTML
https://www.garlic.com/~lynn/gcard.html

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
CMS script, GML, SGML, HTML, etc posts
https://www.garlic.com/~lynn/submain.html#sgml

360/67 blue card

some past posts mentioning cms ios3270 and green card
https://www.garlic.com/~lynn/2022h.html#101 PSR, IOS3270, 3092, & DUMPRX
https://www.garlic.com/~lynn/2022f.html#69 360/67 & DUMPRX
https://www.garlic.com/~lynn/2022b.html#86 IBM "Green Card"
https://www.garlic.com/~lynn/2022.html#126 On the origin of the /text section/ for code
https://www.garlic.com/~lynn/2022.html#116 On the origin of the /text section/ for code
https://www.garlic.com/~lynn/2021k.html#104 DUMPRX
https://www.garlic.com/~lynn/2021i.html#61 Virtual Machine Debugging
https://www.garlic.com/~lynn/2021i.html#29 OoO S/360 descendants
https://www.garlic.com/~lynn/2021g.html#27 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021e.html#44 Blank 80-column punch cards up for grabs
https://www.garlic.com/~lynn/2021.html#9 IBM 1403 printer carriage control tape
https://www.garlic.com/~lynn/2019e.html#0 IBM HONE
https://www.garlic.com/~lynn/2018e.html#53 Updated Green Card
https://www.garlic.com/~lynn/2018b.html#6 S/360 addressing, not Honeywell 200
https://www.garlic.com/~lynn/2018.html#43 VSAM usage for ancient disk models
https://www.garlic.com/~lynn/2017.html#98 360 & Series/1
https://www.garlic.com/~lynn/2016d.html#93 Is it a lost cause?
https://www.garlic.com/~lynn/2016c.html#100 IBM's 96 column punch card (was System/3)?
https://www.garlic.com/~lynn/2013o.html#30 GUI vs 3270 Re: MVS Quick Reference, was: LookAT
https://www.garlic.com/~lynn/2013i.html#40 Reader Comment on SA22-7832-08 (PoPS), should I?
https://www.garlic.com/~lynn/2013h.html#27 Getting at the original command name/line
https://www.garlic.com/~lynn/2013h.html#26 Getting at the original command name/line
https://www.garlic.com/~lynn/2013c.html#25 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013c.html#24 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012n.html#64 Should you support or abandon the 3270 as a User Interface?
https://www.garlic.com/~lynn/2012k.html#93 S/360 I/O activity
https://www.garlic.com/~lynn/2012k.html#73 END OF FILE
https://www.garlic.com/~lynn/2012f.html#53 Image if someone built a general-menu-system
https://www.garlic.com/~lynn/2012e.html#55 Just for a laugh... How to spot an old IBMer
https://www.garlic.com/~lynn/2012e.html#52 M68k add to memory is not a mistake any more
https://www.garlic.com/~lynn/2011h.html#18 Is the magic and romance killed by Windows (and Linux)?
https://www.garlic.com/~lynn/2011g.html#70 History of byte addressing
https://www.garlic.com/~lynn/2011f.html#31 TCP/IP Available on MVS When?
https://www.garlic.com/~lynn/2010q.html#33 IBM S/360 Green Card high quality scan
https://www.garlic.com/~lynn/2010l.html#44 PROP instead of POPS, PoO, et al
https://www.garlic.com/~lynn/2010l.html#27 OS idling
https://www.garlic.com/~lynn/2010h.html#72 1130, was System/3--IBM compilers (languages) available?
https://www.garlic.com/~lynn/2010f.html#22 history of RPG and other languages, was search engine history
https://www.garlic.com/~lynn/2010b.html#44 sysout using machine control instead of ANSI control

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM and OSS

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM and OSS.
Date: 02 Feb, 2023
Blog: Facebook
23jun1969 unbundling announcement started to charge for maint., SE services, (application) software (made the case that kernel software should still be free), etc. Then "Future System" effort in early 70s was totally different from 370 and was going to completely replace it. Internal politics during the period was killing off 370 efforts and the lack of new 370 during the period is credited with giving 370 clone system makers (like Amdahl) their market foothold. When "FS" imploded there was mad rush to get stuff back into the 370 product pipelines, including kicking off the quick&dirty 3033 & 3081 efforts in parallel. Decision was also to transition to charging for all operating system (kernel) software. more FS details
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

... all during the FS period, I continued to work on 360 and then 370 stuff ... including ridiculing the FS activity ... which wasn't exactly a career enhancing activity.

One of my hobbies after joining IBM was enhanced production operating systems for internal datacenters. In the morph from CP67->VM370, lots of stuff was dropped or significantly simplified (including bunch of stuff I had done as undergraduate and after joining IBM). I spent much of 1974 moving to VM370 and upgrading my internal CSC/VM (for internal datacenters). With the decision to transition to charging for kernel software, a bunch of my stuff for internal datacenters was selected to be the guinea pig ... and I got to spend a great deal of time with lawyers and business people on guidelines for kernel software charging.

The transition to charging for kernel software was pretty well complete by the early 80s ... and the OCO-wars (IBM "object code only") began ... which can be seen in some of the VMSHARE archive ... aka TYMSHARE started offering its CMS-based online computer conferencing (precursor to modern social media) "free" to the mainframe user group, SHARE in Aug1976 ... archive here:
http://vm.marist.edu/~vmshare

23jun1969 unbundling announcement posts
https://www.garlic.com/~lynn/submain.html#unbundle
future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
cambridge science center (& CSC/VM) posts
https://www.garlic.com/~lynn/subtopic.html#545tech
csc/vm (&/or sjr/vm) posts
https://www.garlic.com/~lynn/submisc.html#cscvm

some posts mentioning OCO-wars
https://www.garlic.com/~lynn/2022e.html#7 RED and XEDIT fullscreen editors
https://www.garlic.com/~lynn/2022b.html#118 IBM Disks
https://www.garlic.com/~lynn/2022b.html#30 Online at home
https://www.garlic.com/~lynn/2021k.html#121 Computer Performance
https://www.garlic.com/~lynn/2021k.html#50 VM/SP crashing all over the place
https://www.garlic.com/~lynn/2021h.html#55 even an old mainframer can do it
https://www.garlic.com/~lynn/2021g.html#27 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021d.html#2 What's Fortran?!?!
https://www.garlic.com/~lynn/2021c.html#5 Z/VM
https://www.garlic.com/~lynn/2021.html#14 Unbundling and Kernel Software
https://www.garlic.com/~lynn/2018e.html#91 The (broken) economics of OSS
https://www.garlic.com/~lynn/2018d.html#48 IPCS, DUMPRX, 3092, EREP
https://www.garlic.com/~lynn/2018b.html#6 S/360 addressing, not Honeywell 200
https://www.garlic.com/~lynn/2018.html#43 VSAM usage for ancient disk models
https://www.garlic.com/~lynn/2017j.html#16 IBM open sources it's JVM and JIT code
https://www.garlic.com/~lynn/2017g.html#101 SEX
https://www.garlic.com/~lynn/2017g.html#23 Eliminating the systems programmer was Re: IBM cuts contractor billing by 15 percent (our else)
https://www.garlic.com/~lynn/2017e.html#18 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017.html#59 The ICL 2900
https://www.garlic.com/~lynn/2016g.html#68 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016f.html#32 z/OS Operating System size
https://www.garlic.com/~lynn/2016f.html#26 British socialism / anti-trust
https://www.garlic.com/~lynn/2016b.html#18 IBM Destination z - What the Heck Is JCL and Why Does It Look So Funny?
https://www.garlic.com/~lynn/2015h.html#38 high level language idea
https://www.garlic.com/~lynn/2015h.html#32 (External):Re: IBM
https://www.garlic.com/~lynn/2015d.html#59 Western Union envisioned internet functionality
https://www.garlic.com/~lynn/2015d.html#48 Western Union envisioned internet functionality
https://www.garlic.com/~lynn/2015d.html#14 3033 & 3081 question
https://www.garlic.com/~lynn/2015b.html#19 What were the complaints of binary code programmers that not accept Assembly?
https://www.garlic.com/~lynn/2015.html#85 a bit of hope? What was old is new again
https://www.garlic.com/~lynn/2015.html#84 a bit of hope? What was old is new again
https://www.garlic.com/~lynn/2014m.html#35 BBC News - Microsoft fixes '19-year-old' bug with emergency patch
https://www.garlic.com/~lynn/2014i.html#5 "F[R]eebie" software
https://www.garlic.com/~lynn/2014.html#19 the suckage of MS-DOS, was Re: 'Free Unix!

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM and OSS

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM and OSS.
Date: 03 Feb, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#68 IBM and OSS

... all during the FS period, I continued to work on 360 and then 370 stuff ... including periodically ridiculing the FS activity ... which wasn't exactly a career enhancing activity.

the 70s Future System disaster, from Ferguson Morris, "Computer Wars: The Post-IBM World", Time Books, 1993 ....
http://www.amazon.com/Computer-Wars-The-Post-IBM-World/dp/1587981394
... and perhaps most damaging, the old culture under Watson Snr and Jr of free and vigorous debate was replaced with *SYCOPHANCY* and *MAKE NO WAVES* under Opel and Akers. It's claimed that thereafter, IBM lived in the shadow of defeat ... But because of the heavy investment of face by the top management, F/S took years to kill, although its wrong headedness was obvious from the very outset. "For the first time, during F/S, outspoken criticism became politically dangerous," recalls a former top executive.

... snip ...

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

recent posts mentioning no career, no promotions, no raises
https://www.garlic.com/~lynn/2022e.html#100 Mainframe Channel I/O
https://www.garlic.com/~lynn/2022e.html#79 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#59 IBM CEO: Only 60% of office workers will ever return full-time
https://www.garlic.com/~lynn/2021k.html#107 IBM Future System
https://www.garlic.com/~lynn/2021k.html#59 IBM Mainframe
https://www.garlic.com/~lynn/2021c.html#40 Teaching IBM class

--
virtualization experience starting Jan1968, online at home since Mar1970

GML, SGML, & HTML

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: GML, SGML, & HTML
Date: 03 Feb, 2023
Blog: Facebook
some of the MIT 7094/CTSS people went to Project Mac on the 5th flr to do Multics. Others went to the IBM Cambridge Science Center on the 4th flr and did CP40&CP67 (precursor to VM370), the internal network, lots of online and performance/throughput applications, etc. The CTSS Runoff was redone for CMS as "SCRIPT". Then in 1969, GML was invented at the science center (name chosen from the 1st letters of the inventors' last names) and GML tag processing was added to CMS SCRIPT. One of the first mainstream IBM documents done in SCRIPT was the 370 Architecture manual (called "REDBOOK" for distribution in red 3-ring binders). A SCRIPT command line option would print either the full REDBOOK, or the 370 Principles of Operation subset.

After a decade, GML morphs into ISO standard SGML
https://web.archive.org/web/20230703135757/http://www.sgmlsource.com/history/sgmlhist.htm
After another decade it morphs into HTML at CERN. The first HTML webserver in the US was on the Stanford SLAC (CERN sister institution) VM370 system
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994

post about co-worker at science center responsible for internal network (and internet lore)
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
GML/SGML posts
https://www.garlic.com/~lynn/submain.html#sgml
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

some posts mentioning 370 redbook, 370 architecture, & 370 principles of operation
https://www.garlic.com/~lynn/2022c.html#8 Cloud Timesharing
https://www.garlic.com/~lynn/2022b.html#51 IBM History
https://www.garlic.com/~lynn/2022.html#71 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2014c.html#56 Computer Architecture Manuals - tools for writing and maintaining- state of the art?
https://www.garlic.com/~lynn/2010k.html#41 Unix systems and Serialization mechanism
https://www.garlic.com/~lynn/2007i.html#31 Latest Principles of Operation
https://www.garlic.com/~lynn/2006s.html#53 Is the teaching of non-reentrant HLASM coding practices ever defensible?
https://www.garlic.com/~lynn/2004b.html#57 PLO instruction

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 4341

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 4341
Date: 03 Feb, 2023
Blog: Facebook
4341 looked more like an office credenza
https://web.archive.org/web/20190105032753/https://www.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP4341.html

bldg15 got an engineering 4341 (proc cycle time was slowed down 10-20% compared to what production models would be) in Jan1979 for disk testing and I was con'ed into doing benchmarks for a national lab that was looking at getting 70 of them for a compute farm (sort of the (b)leading edge of the coming cluster supercomputing tsunami). A cluster of 5 production models had more throughput, less expensive, smaller footprint, less power and cooling than a 3033. Then large companies were making orders for hundreds of vm4341s at a time for placing out in departmental areas (sort of the (b)leading edge of the coming distributed computing tsunami).

One of the issues was MVS wanted to play in the (exploding) departmental distributed computing market ... but the only new non-datacenter disks were FBA (3370) and MVS didn't have FBA support. Eventually 3370 CKD emulation was shipped as the 3375. However, it didn't do MVS a lot of good; customers were looking at having dozens or scores of VM/4341 systems per support person, while MVS still had a large number of support persons per MVS system. Note that inside IBM, departmental conference rooms started to be in short supply because so many were being converted to departmental vm4341 distributed systems.

DASD, CKD, FBA, multi-track search, etc posts
https://www.garlic.com/~lynn/submain.html#dasd

trivia: when I first transferred to San Jose Research, I got to wander around a lot of IBM & non-IBM datacenters ... including bldg14 (disk engineering) and bldg15 (disk product test) across the street. They were still doing prescheduled, around-the-clock, stand-alone testing. They said that they had recently tried MVS ... but it had 15min mean-time-between-failure (in that environment) requiring manual re-ipl. I offered to rewrite the I/O supervisor making it bullet proof and never fail ... enabling any amount of ondemand concurrent testing, greatly improving productivity. I then wrote an ibm internal research paper on the work and happened to mention the MVS 15min MTBF ... bringing down the wrath of the MVS organization on my head. Later when 3380s were getting ready to ship, FE had 57 simulated errors they felt were likely to happen ... and MVS was crashing in all cases ... and in 2/3rds of the cases there was no indication of what caused the failure. Other trivia: bldg15 found with a little tweaking of 4341 microcode that they could do 3380 3mbyte/sec datastreaming testing.

getting to play disk engineer in bldg 14&15 posts
https://www.garlic.com/~lynn/subtopic.html#disk

In SJR I was also doing some work for System/R (original SQL/relational implementation) and BofA signed up for a joint study with 60 distributed vm/4341s deployed for branches. The corporation was totally preoccupied with EAGLE as the follow-on to IMS ... so it was possible to do tech transfer ("under the radar") to Endicott for SQL/DS (later, when EAGLE implodes, there is a request for how fast System/R could be ported to MVS ... which is eventually announced as DB2 ... originally for decision support only). One of the leading people behind System/R departs for Tandem ... trying to palm off a bunch of stuff on me.

system/r posts
https://www.garlic.com/~lynn/submain.html#systemr

IBM SJR had done internal 4341 VM SSI with 3088/trotter (8 channel CTCA) ... that could do cluster serialization operations in well under a second elapsed time. Then the communication group said that they would veto it shipping to customers unless their cluster operations ran on top of VTAM. Then the cluster operations that ran well under a second were taking half a minute.

Early CP67 under CP67 under CP67 was a joint distributed development project with Endicott for the initial 370 virtual memory development. CP67 (360/67) was modified to support virtual machines with the 370 virtual memory architecture ("CP67H"). Then CP67H was modified to run with the 370 virtual memory architecture (instead of 360/67 architecture) and would run in a CP67H 370 virtual machine (aka "CP67I"). "CP67I" was in regular production use a year before any real 370 virtual memory hardware and was also used for the initial test of the engineering 370/145 supporting virtual memory. In Cambridge, the production CP67L system ran on the real 360/67. "CP67H" ran in a "CP67L" 360/67 virtual machine (in part because 370 virtual memory hadn't been announced and needed to be isolated from the staff, students, and professors from Boston area institutions also using the Cambridge machine), "CP67I" then ran in a "CP67H" 370 virtual machine. Then CMS ran in a "CP67I" 370 virtual machine. Later three people came out from San Jose and implemented 3330 and 2305 device support in CP67I ... became CP67SJ ... which ran on real internal IBM 370 machines both before and after 370 virtual memory was announced (lots of places were still using it after VM370 became available).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

The national lab benchmark was Fortran "RAIN" that came from the 60s and ran on the CDC6600. They were looking for 70 4341s for a compute farm (leading edge of the cluster supercomputing tsunami) ... it ran in 35.77secs on the CDC6600 and in 36.13secs on the engineering 4341 (the engineering 4341 had processor cycle time slowed by 10-20% from what would ship in production machines).
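Back-of-envelope (my arithmetic, not from the original benchmark report): if RAIN runtime scales linearly with processor cycle time, the 10-20% slowdown on the engineering machine implies a production 4341 time of roughly 30-33secs, i.e. somewhat faster than the CDC6600:

```python
# Estimate production-4341 RAIN time from the engineering-4341 number.
# Assumption (hypothetical): runtime scales linearly with cycle time.
eng_time = 36.13   # secs, RAIN on the engineering 4341
cdc_time = 35.77   # secs, RAIN on the CDC6600

for slowdown in (0.10, 0.20):
    prod_time = eng_time / (1.0 + slowdown)
    print(f"{slowdown:.0%} slower engineering cycle -> "
          f"est. production 4341: {prod_time:.2f}s "
          f"(CDC6600/4341 time ratio: {cdc_time / prod_time:.2f})")
```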

posts mentioning national lab 4341 benchmarks
https://www.garlic.com/~lynn/2022g.html#90 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022f.html#89 CDC6600, Cray, Thornton
https://www.garlic.com/~lynn/2021j.html#94 IBM 3278
https://www.garlic.com/~lynn/2021j.html#52 ESnet
https://www.garlic.com/~lynn/2019c.html#49 IBM NUMBERS BIPOLAR'S DAYS WITH G5 CMOS MAINFRAMES
https://www.garlic.com/~lynn/2018d.html#42 Mainframes and Supercomputers, From the Beginning Till Today
https://www.garlic.com/~lynn/2018b.html#49 Think you know web browsers? Take this quiz and prove it
https://www.garlic.com/~lynn/2016h.html#51 Resurrected! Paul Allen's tech team brings 50-year -old supercomputer back from the dead
https://www.garlic.com/~lynn/2016h.html#44 Resurrected! Paul Allen's tech team brings 50-year-old supercomputer back from the dead
https://www.garlic.com/~lynn/2016e.html#116 How the internet was invented
https://www.garlic.com/~lynn/2015h.html#71 Miniskirts and mainframes
https://www.garlic.com/~lynn/2014j.html#37 History--computer performance comparison chart
https://www.garlic.com/~lynn/2014c.html#61 I Must Have Been Dreaming (36-bit word needed for ballistics?)
https://www.garlic.com/~lynn/2013c.html#53 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013.html#38 DEC/PDP minicomputers for business in 1968?
https://www.garlic.com/~lynn/2011d.html#40 IBM Watson's Ancestors: A Look at Supercomputers of the Past
https://www.garlic.com/~lynn/2011c.html#65 Comparing YOUR Computer with Supercomputers of the Past
https://www.garlic.com/~lynn/2009r.html#37 While watching Biography about Bill Gates on CNBC last Night
https://www.garlic.com/~lynn/2009d.html#54 mainframe performance
https://www.garlic.com/~lynn/2006y.html#21 moving on
https://www.garlic.com/~lynn/2006x.html#31 The Future of CPUs: What's After Multi-Core?
https://www.garlic.com/~lynn/2005m.html#25 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2002k.html#4 misc. old benchmarks (4331 & 11/750)
https://www.garlic.com/~lynn/2002i.html#22 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#19 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#7 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002e.html#75 Computers in Science Fiction
https://www.garlic.com/~lynn/2002b.html#0 Microcode?
https://www.garlic.com/~lynn/2001d.html#67 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2000d.html#0 Is a VAX a mainframe?

posts mentioning CP67L, CP67H, CP67I, and CP67SJ
https://www.garlic.com/~lynn/2022h.html#22 370 virtual memory
https://www.garlic.com/~lynn/2022e.html#94 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022e.html#80 IBM Quota
https://www.garlic.com/~lynn/2022d.html#59 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022.html#55 Precursor to current virtual machines and containers
https://www.garlic.com/~lynn/2022.html#12 Programming Skills
https://www.garlic.com/~lynn/2021g.html#34 IBM Fan-fold cards
https://www.garlic.com/~lynn/2021g.html#6 IBM 370
https://www.garlic.com/~lynn/2021c.html#5 Z/VM
https://www.garlic.com/~lynn/2019b.html#28 Science Center
https://www.garlic.com/~lynn/2017.html#87 The ICL 2900
https://www.garlic.com/~lynn/2014d.html#57 Difference between MVS and z / OS systems
https://www.garlic.com/~lynn/2011b.html#69 Boeing Plant 2 ... End of an Era
https://www.garlic.com/~lynn/2010e.html#23 Item on TPF
https://www.garlic.com/~lynn/2010b.html#51 Source code for s/360
https://www.garlic.com/~lynn/2009s.html#17 old email
https://www.garlic.com/~lynn/2007i.html#16 when was MMU virtualization first considered practical?
https://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 4341

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 4341
Date: 03 Feb, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#71 IBM 4341

Recent post references old email about 85/165/168/3033/3090 being same machine with slight tweaks (also references that Endicott was moving up from the mid-range)
https://www.linkedin.com/pulse/zvm-50th-part-6-lynn-wheeler/

reference to canceling Amdahl's ACS/360 (executives were afraid that it would advance the state of the art too fast and they would lose control of the market) ... lists some features that show up more than 20yrs later with ES/9000 (after ACS/360 was canceled, Amdahl leaves IBM).
https://people.cs.clemson.edu/~mark/acs_end.html

trivia: with the demise of FS, there is a mad rush to get stuff back into the 370 product pipeline, including kicking off 3033&3081 in parallel. 3033 starts out with 168 logic remapped to 20% faster chips. They take the 158 integrated-channel microcode for the 303x channel director. A 3031 is a 158 engine with just the 370 microcode (and no integrated-channel microcode) and a 2nd 158 engine with the integrated-channel microcode (and no 370 microcode) for the 303x channel director. A 3032 is a 168-3 modified to use 303x channel directors (aka 158 engines with integrated-channel microcode). Some more here (mostly about the demise of FS).
http://www.jfsowa.com/computer/memo125.htm

future system posts
https://www.garlic.com/~lynn/submain.html#futuresys

a couple previous refs to "Part 6" post
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2023.html#65 7090/7044 Direct Couple

copies of the email in these posts
https://www.garlic.com/~lynn/2022h.html#17 Arguments for a Sane Instruction Set Architecture--5 years later
https://www.garlic.com/~lynn/2022e.html#61 Channel Program I/O Processing Efficiency
https://www.garlic.com/~lynn/2021b.html#23 IBM Recruiting
https://www.garlic.com/~lynn/2019c.html#45 IBM 9020

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 4341

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 4341
Date: 04 Feb, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2023.html#72 IBM 4341

3340 winchester
https://www.ibm.com/ibm/history/exhibits/storage/storage_PH3340.html

upthread I mention rewriting the I/O supervisor for disk engineering (bldg14) and product test (bldg15) to make it bullet proof and never fail so they could switch from stand-alone, prescheduled, around-the-clock, mainframe disk testing to ondemand, any amount of concurrent testing (greatly improving productivity; they had mentioned having tried MVS in that environment and it had 15min MTBF requiring manual re-ipl).

When the engineering 3033 (1st outside POK, #3 or #4) came into bldg15, we set up a private online service (disk testing took only a percent or two of processing), found a couple strings of 3330s and a 3830 controller (they also ran 3270 coax from the 3033 under the street to my office in SJR/bldg28 ... I had a coax switch box between an increasing number of systems). At the time, there was somebody doing air-bearing simulation (for design of thin-film, floating heads) on the SJR 370/195 ... but even with high priority was only getting a couple turnarounds a month. We set him up on the 3033 ... and even tho it had less than half the processing power of the 195, he was able to get multiple turnarounds a day. Thin-film, floating heads 1st shipped with 3370 FBA drives:
https://en.wikipedia.org/wiki/History_of_IBM_magnetic_disk_drives#IBM_3370

mentions that the only systems that supported FBA were DOS & VM. I had offered the MVS DASD group FBA support ... but they said I needed to show an additional $26M ($200M-$300M in sales) to cover the cost of publications and education ... but IBM was already selling every disk it made ... and I couldn't use a business case of lifetime savings and productivity. Note all disks were already starting to transition to fixed-block internally; the CKD 3380 formulas for records/track required rounding record size up to the fixed cell size. There haven't been any CKD disks made for decades; all CKD disks are now simulated on industry standard fixed-block disks.
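The records/track rounding can be sketched as follows (a minimal illustration; the constants follow the commonly cited 3380 capacity arithmetic of 32-byte cells with fixed per-record overhead, but treat them as approximate rather than authoritative):

```python
import math

def records_per_track(data_len, key_len=0,
                      track_cells=1499,   # usable 32-byte cells per track
                      cell_size=32,       # fixed-block cell size in bytes
                      data_overhead=15,   # per-record overhead, in cells
                      key_overhead=12):   # extra bytes charged for a key field
    """Illustrative CKD track-capacity arithmetic: data and key lengths
    are rounded UP to whole fixed-size cells, so the medium is really
    fixed-block underneath even when formatted as CKD."""
    cells = data_overhead + math.ceil(data_len / cell_size)
    if key_len:
        cells += math.ceil((key_len + key_overhead) / cell_size)
    return track_cells // cells

# Rounding effect: a 1-byte record costs the same track space as a 32-byte one
assert records_per_track(1) == records_per_track(32)
```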

other trivia: even MVS 3090 systems (MVS not having FBA support) had at least a pair of 3370s for operation
https://web.archive.org/web/20230719145910/https://www.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3090.html

posts getting to play disk engineer in bldgs14&15
https://www.garlic.com/~lynn/subtopic.html#disk
DASD, CKD, FBA, fixed-block, multi-track search, etc posts
https://www.garlic.com/~lynn/submain.html#dasd

3081 trivia: 3033&3081 were quick&dirty efforts kicked off in parallel after the Future System effort imploded. This FS article describes the 3081 as using warmed-over FS technology, which likely motivated TCM packaging since the 3081 had such an enormous amount of circuitry (TCMs reducing the physical space required)
http://www.jfsowa.com/computer/memo125.htm
The 370 emulator minus the FS microcode was eventually sold in 1980 as the IBM 3081. The ratio of the amount of circuitry in the 3081 to its performance was significantly worse than other IBM systems of the time; its price/performance ratio wasn't quite so bad because IBM had to cut the price to be competitive. The major competition at the time was from Amdahl Systems -- a company founded by Gene Amdahl, who left IBM shortly before the FS project began, when his plans for the Advanced Computer System (ACS) were killed. The Amdahl machine was indeed superior to the 3081 in price/performance and spectacularly superior in terms of performance compared to the amount of circuitry.

... snip ...

reference to canceling ACS/360 (executives thought it would advance the state of the art too fast and they would lose control of the market)
https://people.cs.clemson.edu/~mark/acs_end.html
has reference to some ACS/360 features showing up more than 20yrs later with ES/9000

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 4341

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 4341
Date: 04 Feb, 2023
Blog: Facebook
collected comments from a 4341 discussion
https://www.linkedin.com/pulse/ibm4341-lynn-wheeler
and
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2023.html#72 IBM 4341
https://www.garlic.com/~lynn/2023.html#73 IBM 4341

The 4341 mainframe computer system was introduced by IBM on June 30, 1979; the 4341 looked more like an office credenza
https://web.archive.org/web/20190105032753/https://www.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP4341.html

Bldg15 got an engineering 4341 in Jan1979 for disk testing and I was con'ed into doing benchmarks for a national lab that was looking at getting 70 for a compute farm (sort of (b)leading edge of the coming cluster supercomputing tsunami). The national lab benchmark was Fortran "RAIN" that came from the 60s and ran on the CDC6600; it ran in 35.77secs on the CDC6600 and in 36.13secs on the engineering 4341 (the engineering 4341 had processor cycle time slowed by 10-20% from what would ship in production machines). A cluster of five production 4341s had more throughput than a 3033, and was less expensive with smaller footprint and less power and cooling. Then large companies were making orders for hundreds of vm4341s at a time for placing out in departmental areas (sort of (b)leading edge of the coming distributed computing tsunami).

One of the issues was MVS wanted to play in the (exploding) departmental distributed computing market ... but the only new non-datacenter disks were FBA (3370) and MVS didn't have FBA support. Eventually 3370 CKD emulation was shipped as the 3375. However, it didn't do MVS a lot of good; customers were looking at having dozens or scores of VM/4341 systems per support person, while MVS still had a large number of support persons per MVS system. Note that inside IBM, departmental conference rooms started to be in short supply because so many were being converted to departmental vm4341 distributed systems.

trivia: when I first transferred to San Jose Research, I got to wander around a lot of IBM & non-IBM datacenters ... including bldg14 (disk engineering) and bldg15 (disk product test) across the street. They were still doing prescheduled, around-the-clock, stand-alone testing. They said that they had recently tried MVS ... but it had 15min mean-time-between-failure (in that environment) requiring manual re-ipl. I offered to rewrite the I/O supervisor making it bullet proof and never fail ... enabling any amount of ondemand concurrent testing, greatly improving productivity. I then wrote an IBM internal research paper on the work and happened to mention the MVS 15min MTBF ... bringing down the wrath of the MVS organization on my head. Later when 3380s were getting ready to ship, FE had 57 simulated errors they felt were likely to happen ... MVS was crashing in all cases ... and in 2/3rds of the cases there was no indication of what caused the failure. Other trivia: bldg15 found that with a little tweaking of 4341 microcode they could do 3380 3mbyte/sec datastreaming testing.

ref: winchester (3340)
https://www.ibm.com/ibm/history/exhibits/storage/storage_PH3340.html

When the engineering 3033 (1st outside POK, #3 or #4) came into bldg15, we set up a private online service (disk testing took only a percent or two of processing), found a couple strings of 3330s and a 3830 controller (they also ran 3270 coax from the 3033 under the street to my office in SJR/bldg28 ... I had a coax switch box between an increasing number of systems). At the time, there was somebody doing air-bearing simulation (for design of thin-film, floating heads) on the SJR 370/195 ... but even with high priority was only getting a couple turnarounds a month. We set him up on the 3033 ... and even tho it had less than half the processing power of the 195, he was able to get multiple turnarounds a day. Thin-film, floating heads 1st shipped with 3370 FBA drives:
https://en.wikipedia.org/wiki/History_of_IBM_magnetic_disk_drives#IBM_3370

mentions that the only systems that supported FBA were DOS & VM. I had offered the MVS DASD group FBA support ... but they said I needed to show an additional $26M ($200M-$300M in sales) to cover the cost of publications and education ... but IBM was already selling every disk it made ... and I couldn't use a business case of lifetime savings and productivity. Note all disks were already starting to transition to fixed-block internally; the CKD 3380 formulas for records/track required rounding record size up to the fixed cell size. There haven't been any CKD disks made for decades; all CKD disks are now simulated on industry standard fixed-block disks.

other trivia: even MVS 3090 systems (MVS not having FBA support) had at least a pair of 3370s for operation
https://web.archive.org/web/20230719145910/https://www.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3090.html

other recent posts mentioning getting to play disk engineer
https://www.linkedin.com/pulse/mainframe-channel-io-lynn-wheeler/
https://www.linkedin.com/pulse/mainframe-channel-redrive-lynn-wheeler/

getting to play disk engineer archived posts
https://www.garlic.com/~lynn/subtopic.html#disk
DASD, CKD, FBA, multi-track archived posts
https://www.garlic.com/~lynn/submain.html#dasd

vm/4341 and distributed computing trivia (in large part non-datacenter, departmental rooms); 1Jan1983 is the cutover from ARPANET (approx. 100 IMP network nodes and 255 host computers) to TCP/IP & "internet" ... at a time when the internal network was rapidly approaching 1000 network nodes (in large part from the explosion in vm/4341s ... the rapid increase eventually exceeding 4000 internal network nodes) ... these were non-SNA. Old post with a list of corporate locations that added one or more internal network nodes during 1983
https://www.garlic.com/~lynn/2006k.html#8

Co-worker at science center responsible for the internal network (also has some of his battles with IBM about moving to TCP/IP)
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/

the technology had also been used for the corporate sponsored univ. BITNET
https://en.wikipedia.org/wiki/BITNET

internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

Part of the distributed departmental appeal was the darkroom, unattended operation that the science center (and the 60s commercial online CSC spinoffs) had done for cp67 (eventually migrating to vm370). trivia: In the 60s, IBM rented/leased mainframes, with charges based on the "system meter" ... which ran whenever the processor and/or any channels were busy. One hack was a terminal channel program that would allow the channel to stop ... but immediately respond when any characters were arriving. The system meter had to have all processors and channels idle for 400ms before stopping (long after IBM had converted from rent/lease to sales, MVS still had a 400ms timer event that guaranteed the "system meter" would never stop).

online commercial time sharing
https://www.garlic.com/~lynn/submain.html#timeshare

In SJR I was also doing some work for System/R (original SQL/relational implementation) and BofA signed up for a joint study with 60 distributed vm/4341s deployed for branches. The corporation was totally preoccupied with EAGLE as the follow-on to IMS ... so it was possible to do tech transfer ("under the radar") to Endicott for SQL/DS (later, when EAGLE implodes, there is a request for how fast System/R could be ported to MVS ... which is eventually announced as DB2 ... originally for decision support only). One of the leading people behind System/R departs for Tandem ... trying to palm off a bunch of stuff on me.

system/r posts
https://www.garlic.com/~lynn/submain.html#systemr

IBM SJR had done internal 4341 VM SSI with 3088/trotter (8 channel CTCA) ... that could do cluster serialization operations in well under a second elapsed time. Then the communication group said that they would veto it shipping to customers unless their cluster operations ran on top of VTAM. Then the same cluster operations that ran well under a second were taking half a minute.

Recent post references old email about 85/165/168/3033/3090 being same machine with slight tweaks (also references that Endicott was moving up from the mid-range)
https://www.linkedin.com/pulse/zvm-50th-part-6-lynn-wheeler/

reference to canceling Amdahl's ACS/360 (executives were afraid that it would advance the state of the art too fast and they would lose control of the market) ... lists some features that show up more than 20yrs later with ES/9000 (after ACS/360 was canceled, Amdahl leaves IBM).
https://people.cs.clemson.edu/~mark/acs_end.html

trivia: with the demise of FS, there is a mad rush to get stuff back into the 370 product pipeline, including kicking off 3033&3081 in parallel. 3033 starts out with 168 logic remapped to 20% faster chips. They take the 158 integrated-channel microcode for the 303x channel director. A 3031 is a 158 engine with just the 370 microcode (and no integrated-channel microcode) and a 2nd 158 engine with the integrated-channel microcode (and no 370 microcode) for the 303x channel director. A 3032 is a 168-3 modified to use 303x channel directors (aka 158 engines with integrated-channel microcode). Some more here (mostly about the demise of FS); also the enormous amount of circuitry in the 3081 is a plausible motivation for TCMs (cramming it into a smaller volume)
https://people.computing.clemson.edu/~mark/fs.html
http://www.jfsowa.com/computer/memo125.htm
The 370 emulator minus the FS microcode was eventually sold in 1980 as the IBM 3081. The ratio of the amount of circuitry in the 3081 to its performance was significantly worse than other IBM systems of the time; its price/performance ratio wasn't quite so bad because IBM had to cut the price to be competitive. The major competition at the time was from Amdahl Systems -- a company founded by Gene Amdahl, who left IBM shortly before the FS project began, when his plans for the Advanced Computer System (ACS) were killed. The Amdahl machine was indeed superior to the 3081 in price/performance and spectacularly superior in terms of performance compared to the amount of circuitry.

... snip ...

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys

recent post about early CP67, VM370, and getting con'ed into doing the analysis for ECPS for 138/148 ... there was also an effort to get corporate to agree to pre-installing VM370 on every 138/148 shipped ... however the head of POK was in the process of convincing corporate to kill the vm370 product and transfer all the people to POK for MVS/XA. Endicott eventually manages to save the vm370 product mission, but had to reconstitute a development group from scratch.
https://www.linkedin.com/pulse/zvm-50th-part-6-lynn-wheeler/

Then 4331/4341 (follow-on to 138/148) also shipped with ECPS. In the early 80s, there was an effort to use 801/RISC for the microprocessors for controllers, low&mid-range 370s (4361/4381, follow-on for 4331/4341), AS/400 (follow-on for s/38), etc ... for various reasons all those efforts floundered. I contributed to an Endicott white-paper arguing against using the 801/RISC Iliad chip with 370 "microcode" for 4361/4381 ... chip technology was reaching the point where nearly all of 370 could be implemented directly in silicon. Boeblingen was doing the ROMAN chipset that implemented 370 with the performance of a 168 (I also had a proposal for cluster 370, to see how many ROMAN chipsets I could cram in a rack).

801/risc, iliad, romp, pc/rt, rios, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801

A few years later I was doing HA/6000, which I rename HA/CMP (High Availability Cluster Multi-Processing) for cluster scale-up ... how many RS/6000s in a rack and how many racks tied together. Then cluster scale-up is transferred for announce as an IBM supercomputer (for technical/scientific *ONLY*) and we are told we can't work on anything with more than four processors. We leave IBM a few months later.
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing

ha/cmp archived posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

Early CP67 under CP67 under CP67 was a joint distributed development project with Endicott for the initial 370 virtual memory development. CP67 (360/67) was modified to support virtual machines with the 370 virtual memory architecture ("CP67H"). Then CP67H was modified to run with the 370 virtual memory architecture (instead of 360/67 architecture) and would run in a CP67H 370 virtual machine (aka "CP67I"). "CP67I" was in regular production use a year before any real 370 virtual memory hardware and was also used for the initial test of the engineering 370/145 supporting virtual memory. In Cambridge, the production CP67L system ran on the real 360/67. "CP67H" ran in a "CP67L" 360/67 virtual machine (in part because 370 virtual memory hadn't been announced and needed to be isolated from the staff, students, and professors from Boston area institutions also using the Cambridge machine), "CP67I" then ran in a "CP67H" 370 virtual machine. Then CMS ran in a "CP67I" 370 virtual machine. Later three people came out from San Jose and implemented 3330 and 2305 device support in CP67I ... became CP67SJ ... which ran on real internal IBM 370 machines both before and after 370 virtual memory was announced (lots of places were still using it after VM370 became available).

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

A coworker at SJR left IBM and was doing lots of contracting work in silicon valley ... lots of work on HSPICE (Fortran chip design) and datacenter support work for the senior engineering VP at a large VLSI shop. He had reworked AT&T C for 370 with lots of fixes and 370 code optimization ... and then ported the BSD chip tools to 370. One day the marketing rep came in and asked him what he was doing. He said implementing ethernet support so they could use SGI graphical workstations as front-ends to the 370s. The marketing rep told him he should be doing token-ring support instead, or otherwise the customer might find service wouldn't be as timely as it had been. I almost immediately get an hour-long phone call loaded with 4-letter words. The next morning the senior engineering VP has a press conference and says they are moving everything off 370s to SUN servers. IBM then has a bunch of task-force analyses to look at why silicon valley wasn't using 370 mainframes (but they weren't allowed to consider what the marketing rep had done).

posts in mainframe related sequence
https://www.linkedin.com/pulse/zvm-50th-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-2-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-4-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-5-lynn-wheeler/
https://www.linkedin.com/pulse/zvm-50th-part-6-lynn-wheeler/
https://www.linkedin.com/pulse/mainframe-channel-io-lynn-wheeler/
https://www.linkedin.com/pulse/mainframe-channel-redrive-lynn-wheeler/
https://www.linkedin.com/pulse/inventing-internet-lynn-wheeler/
https://www.linkedin.com/pulse/ibm4341-lynn-wheeler

mostly business
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
https://www.linkedin.com/pulse/boyd-ibm-wild-duck-discussion-lynn-wheeler/
https://www.linkedin.com/pulse/ibm-downfall-lynn-wheeler/
https://www.linkedin.com/pulse/ibm-breakup-lynn-wheeler/
https://www.linkedin.com/pulse/ibm-controlling-market-lynn-wheeler/

ibm downfall, breakup, controlling market archived posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

recent archived posts referencing my IBM &/or tech linkedin articles:
https://www.garlic.com/~lynn/2023.html#70 GML, SGML, & HTML
https://www.garlic.com/~lynn/2023.html#65 7090/7044 Direct Couple
https://www.garlic.com/~lynn/2023.html#60 Boyd & IBM "Wild Duck" Discussion
https://www.garlic.com/~lynn/2023.html#59 Classified Material and Security
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2023.html#54 Classified Material and Security
https://www.garlic.com/~lynn/2023.html#52 IBM Bureaucrats, Careerists, MBAs (and Empty Suits)
https://www.garlic.com/~lynn/2023.html#51 IBM Bureaucrats, Careerists, MBAs (and Empty Suits)
https://www.garlic.com/~lynn/2023.html#49 23Jun1969 Unbundling and Online IBM Branch Offices
https://www.garlic.com/~lynn/2023.html#46 MTS & IBM 360/67
https://www.garlic.com/~lynn/2023.html#45 IBM 3081 TCM
https://www.garlic.com/~lynn/2023.html#43 IBM changes between 1968 and 1989
https://www.garlic.com/~lynn/2023.html#41 IBM 3081 TCM
https://www.garlic.com/~lynn/2023.html#38 Disk optimization
https://www.garlic.com/~lynn/2023.html#36 IBM changes between 1968 and 1989
https://www.garlic.com/~lynn/2023.html#33 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#31 IBM Change
https://www.garlic.com/~lynn/2023.html#30 IBM Change
https://www.garlic.com/~lynn/2023.html#28 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#27 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#25 IBM Punch Cards
https://www.garlic.com/~lynn/2023.html#21 IBM Change
https://www.garlic.com/~lynn/2023.html#20 IBM Change
https://www.garlic.com/~lynn/2023.html#18 PROFS trivia
https://www.garlic.com/~lynn/2023.html#14 360 Announce and then the Future System Disaster
https://www.garlic.com/~lynn/2023.html#12 IBM Marketing, Sales, Branch Offices
https://www.garlic.com/~lynn/2023.html#11 IBM Loses Top Patent Spot After Decades as Leader
https://www.garlic.com/~lynn/2023.html#10 History Is Un-American. Real Americans Create Their Own Futures
https://www.garlic.com/~lynn/2023.html#4 Mainrame Channel Redrive
https://www.garlic.com/~lynn/2023.html#0 AUSMINIUM FOUND IN HEAVY RED-TAPE DISCOVERY
https://www.garlic.com/~lynn/2022h.html#124 Corporate Computer Conferencing
https://www.garlic.com/~lynn/2022h.html#121 IBM Controlling the Market
https://www.garlic.com/~lynn/2022h.html#120 IBM Controlling the Market
https://www.garlic.com/~lynn/2022h.html#118 IBM Breakup
https://www.garlic.com/~lynn/2022h.html#115 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022h.html#114 TOPS-20 Boot Camp for VMS Users 05-Mar-2022
https://www.garlic.com/~lynn/2022h.html#108 IBM 360
https://www.garlic.com/~lynn/2022h.html#107 IBM Downfall
https://www.garlic.com/~lynn/2022h.html#105 IBM 360
https://www.garlic.com/~lynn/2022h.html#104 IBM 360
https://www.garlic.com/~lynn/2022h.html#103 IBM 360
https://www.garlic.com/~lynn/2022h.html#102 IBM Pension
https://www.garlic.com/~lynn/2022h.html#99 IBM 360
https://www.garlic.com/~lynn/2022h.html#94 IBM 360
https://www.garlic.com/~lynn/2022h.html#93 IBM 360
https://www.garlic.com/~lynn/2022h.html#92 Psychology of Computer Programming
https://www.garlic.com/~lynn/2022h.html#88 Psychology of Computer Programming
https://www.garlic.com/~lynn/2022h.html#86 Mainframe TCP/IP
https://www.garlic.com/~lynn/2022h.html#84 CDC, Cray, Supercomputers
https://www.garlic.com/~lynn/2022h.html#77 The Internet Is Having Its Midlife Crisis
https://www.garlic.com/~lynn/2022h.html#72 The CHRISTMA EXEC network worm - 35 years and counting!
https://www.garlic.com/~lynn/2022h.html#59 360/85
https://www.garlic.com/~lynn/2022h.html#58 Model Mainframe
https://www.garlic.com/~lynn/2022h.html#57 Christmas 1989
https://www.garlic.com/~lynn/2022h.html#56 Tandem Memos
https://www.garlic.com/~lynn/2022h.html#55 More John Boyd and OODA-loop
https://www.garlic.com/~lynn/2022h.html#47 Computer History, OS/360, Fred Brooks, MMM
https://www.garlic.com/~lynn/2022h.html#43 1973 ARPANET Map
https://www.garlic.com/~lynn/2022h.html#36 360/85
https://www.garlic.com/~lynn/2022h.html#31 IBM OS/360
https://www.garlic.com/~lynn/2022h.html#27 370 virtual memory
https://www.garlic.com/~lynn/2022h.html#26 Inventing the Internet
https://www.garlic.com/~lynn/2022h.html#25 Sun Tzu, Aristotle, and John Boyd
https://www.garlic.com/~lynn/2022h.html#24 Inventing the Internet
https://www.garlic.com/~lynn/2022h.html#21 370 virtual memory
https://www.garlic.com/~lynn/2022h.html#19 Sun Tzu, Aristotle, and John Boyd
https://www.garlic.com/~lynn/2022h.html#18 Sun Tzu, Aristotle, and John Boyd
https://www.garlic.com/~lynn/2022h.html#17 Arguments for a Sane Instruction Set Architecture--5 years later
https://www.garlic.com/~lynn/2022h.html#16 Inventing the Internet
https://www.garlic.com/~lynn/2022h.html#12 Inventing the Internet
https://www.garlic.com/~lynn/2022h.html#11 Computer History IBM 305 RAMAC and 650 RAMAC, 1956 (350 Disk Storage)
https://www.garlic.com/~lynn/2022h.html#8 Elizabeth Warren to Jerome Powell: Just how many jobs do you plan to kill?
https://www.garlic.com/~lynn/2022h.html#3 AL Gore Invented The Internet
https://www.garlic.com/~lynn/2022g.html#95 Iconic consoles of the IBM System/360 mainframes, 55 years old
https://www.garlic.com/~lynn/2022g.html#91 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022g.html#88 IBM Cambridge Science Center Performance Technology
https://www.garlic.com/~lynn/2022g.html#84 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022g.html#82 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022g.html#80 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022g.html#75 RS/6000 (and some mainframe)
https://www.garlic.com/~lynn/2022g.html#74 Mainframe and/or Cloud
https://www.garlic.com/~lynn/2022g.html#72 Mainframe and/or Cloud
https://www.garlic.com/~lynn/2022g.html#68 Datacenter Vulnerability
https://www.garlic.com/~lynn/2022g.html#67 30 years of (IBM) Management Briefings
https://www.garlic.com/~lynn/2022g.html#66 IBM Dress Code
https://www.garlic.com/~lynn/2022g.html#65 IBM DPD
https://www.garlic.com/~lynn/2022g.html#62 IBM DPD
https://www.garlic.com/~lynn/2022g.html#60 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022g.html#59 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022g.html#58 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022g.html#57 IBM changes to retirement benefits
https://www.garlic.com/~lynn/2022g.html#54 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022g.html#52 IBM changes to retirement benefits
https://www.garlic.com/~lynn/2022g.html#49 Some BITNET (& other) History
https://www.garlic.com/~lynn/2022g.html#47 Some BITNET (& other) History
https://www.garlic.com/~lynn/2022g.html#44 Some BITNET (& other) History
https://www.garlic.com/~lynn/2022g.html#43 Some BITNET (& other) History
https://www.garlic.com/~lynn/2022g.html#40 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022g.html#31 Sears is shutting its last store in Illinois, its home state
https://www.garlic.com/~lynn/2022g.html#24 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022g.html#23 IBM APL
https://www.garlic.com/~lynn/2022g.html#20 9/11
https://www.garlic.com/~lynn/2022g.html#5 IBM Tech Editor
https://www.garlic.com/~lynn/2022g.html#3 IBM Wild Ducks
https://www.garlic.com/~lynn/2022g.html#2 VM/370
https://www.garlic.com/~lynn/2022f.html#122 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022f.html#120 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022f.html#119 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022f.html#118 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022f.html#113 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022f.html#110 IBM Downfall
https://www.garlic.com/~lynn/2022f.html#108 IBM Downfall
https://www.garlic.com/~lynn/2022f.html#105 IBM Downfall
https://www.garlic.com/~lynn/2022f.html#101 Father, Son, and Co.: My Life at IBM and Beyond
https://www.garlic.com/~lynn/2022f.html#95 VM I/O
https://www.garlic.com/~lynn/2022f.html#85 IBM CKD DASD
https://www.garlic.com/~lynn/2022f.html#82 Why the Soviet computer failed
https://www.garlic.com/~lynn/2022f.html#79 Why the Soviet computer failed
https://www.garlic.com/~lynn/2022f.html#73 IBM/PC
https://www.garlic.com/~lynn/2022f.html#72 IBM/PC
https://www.garlic.com/~lynn/2022f.html#71 COMTEN - IBM Clone Telecommunication Controller
https://www.garlic.com/~lynn/2022f.html#69 360/67 & DUMPRX
https://www.garlic.com/~lynn/2022f.html#67 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022f.html#61 200TB SSDs could come soon thanks to Micron's new chip
https://www.garlic.com/~lynn/2022f.html#60 John Boyd and IBM Wild Ducks
https://www.garlic.com/~lynn/2022f.html#57 The Man That Helped Change IBM
https://www.garlic.com/~lynn/2022f.html#53 z/VM 50th - part 4
https://www.garlic.com/~lynn/2022f.html#51 IBM Career
https://www.garlic.com/~lynn/2022f.html#50 z/VM 50th - part 3
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022f.html#47 z/VM 50th
https://www.garlic.com/~lynn/2022f.html#46 What's something from the early days of the Internet which younger generations may not know about?
https://www.garlic.com/~lynn/2022f.html#44 z/VM 50th
https://www.garlic.com/~lynn/2022f.html#42 IBM Bureaucrats
https://www.garlic.com/~lynn/2022f.html#28 IBM Power: The Servers that Apple Should Have Created
https://www.garlic.com/~lynn/2022f.html#6 What is IBM SNA? --
virtualization experience starting Jan1968, online at home since Mar1970

The Pentagon Saw a Warship Boondoggle

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: The Pentagon Saw a Warship Boondoggle
Date: 04 Feb, 2023
Blog: Facebook
The Pentagon Saw a Warship Boondoggle. Congress Saw Jobs. After years of crippling problems and a changing mission, the Navy pushed to retire nine of its newest ships. Then the lobbying started.
https://www.nytimes.com/2023/02/04/us/politics/littoral-combat-ships-lobbying.html
But the Pentagon last year made a startling announcement: Eight of the 10 Freedom-class littoral combat ships now based in Jacksonville and another based in San Diego would be retired, even though they averaged only four years old and had been built to last 25 years.

... snip ...

The Pentagon Labyrinth
https://www.pogo.org/podcasts/pentagon-labyrinth
http://chuckspinney.blogspot.com/p/pentagon-labyrinth.html
http://dnipogo.org/labyrinth/

similar .... F22 hangar empress (2009) "Can't Fly, Won't Die"
http://nypost.com/2009/07/17/cant-fly-wont-die/
Pilots call high-maintenance aircraft "hangar queens." Well, the F-22's a hangar empress. After three expensive decades in development, the plane meets fewer than one-third of its specified requirements. Anyway, an enemy wouldn't have to down a single F-22 to defeat it. Just strike the hi-tech maintenance sites, and it's game over. (In WWII, we didn't shoot down every Japanese Zero; we just sank their carriers.) The F-22 isn't going to operate off a dirt strip with a repair tent.

But this is all about lobbying, not about lobbing bombs. Cynically, Lockheed Martin distributed the F-22 workload to nearly every state, employing under-qualified sub-contractors to create local financial stakes in the program. Great politics -- but the result has been a quality collapse.


... snip ...

military-industrial(-congressional) complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
perpetual war posts
https://www.garlic.com/~lynn/submisc.html#perpetual.war

some past posts mentioning navy littoral ships
https://www.garlic.com/~lynn/2017c.html#27 Pentagon Blocks Littoral Combat Ship Overrun From a GAO Report
https://www.garlic.com/~lynn/2016b.html#91 Computers anyone?
https://www.garlic.com/~lynn/2015.html#68 IBM Data Processing Center and Pi
https://www.garlic.com/~lynn/2014d.html#69 Littoral Warfare Ship
https://www.garlic.com/~lynn/2012n.html#22 Preparing for War with China

past posts mentioning "pentagon labyringth" and/or "hangar empress"
https://www.garlic.com/~lynn/2022h.html#18 Sun Tzu, Aristotle, and John Boyd
https://www.garlic.com/~lynn/2022g.html#55 F-35A fighters unreliable, 'unready 234 times over 18-month period'
https://www.garlic.com/~lynn/2022c.html#105 The Bunker: Pentagon Hardware Hijinks
https://www.garlic.com/~lynn/2021g.html#87 The Bunker: Follow All of the Money. F-35 Math 1.0 Another portent of problems
https://www.garlic.com/~lynn/2021f.html#63 'A perfect storm': Airmen, F-22s struggle at Eglin nearly three years after Hurricane Michael
https://www.garlic.com/~lynn/2021e.html#88 The Bunker: More Rot in the Ranks
https://www.garlic.com/~lynn/2021c.html#82 The F-35 and other Legacies of Failure
https://www.garlic.com/~lynn/2019e.html#118 Pentagon: The F-35 breaks down too often and takes too long to repair
https://www.garlic.com/~lynn/2019d.html#104 F-35
https://www.garlic.com/~lynn/2018e.html#83 The Pentagon's New Stealth Bookkeeping
https://www.garlic.com/~lynn/2018c.html#108 F-35
https://www.garlic.com/~lynn/2018c.html#63 The F-35 has a basic flaw that means an F-22 hybrid could outclass it -- and that's a big problem
https://www.garlic.com/~lynn/2018c.html#14 Air Force Risks Losing Third of F-35s If Upkeep Costs Aren't Cut
https://www.garlic.com/~lynn/2017c.html#51 F-35 Replacement: F-45 Mustang II Fighter -- Simple & Lightweight
https://www.garlic.com/~lynn/2016h.html#93 F35 Program
https://www.garlic.com/~lynn/2016h.html#76 The F-35 Stealth Fighter Is Politically Unstoppable----Even Under President Trump
https://www.garlic.com/~lynn/2016h.html#40 The F-22 Raptor Is the World's Best Fighter (And It Has a Secret Weapon That Is Out in the Open)
https://www.garlic.com/~lynn/2016e.html#61 5th generation stealth, thermal, radar signature
https://www.garlic.com/~lynn/2016b.html#95 Computers anyone?
https://www.garlic.com/~lynn/2016b.html#92 Computers anyone?
https://www.garlic.com/~lynn/2016b.html#91 Computers anyone?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 4341

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 4341
Date: 04 Feb, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2023.html#72 IBM 4341
https://www.garlic.com/~lynn/2023.html#73 IBM 4341
https://www.garlic.com/~lynn/2023.html#74 IBM 4341

... response in a 4341 discussion comment about MVS using the 370 "RRB" (reset reference bit) instruction.

CP67 release 1 ... delivered to the univ (3rd installation, after MIT Lincoln Labs and Cambridge itself) would look for pages to replace by scanning all of memory for the first page that didn't belong to an executing ("in-queue") program ... if none was found, it would start the scan over and just pick the first page found ... and then reset the next scan start to the immediately following page.

I rewrote it to be highly optimized "global" least recently used algorithm ... (using ISK/SSK ... sort of emulating RRB, which wasn't available on 360/67). Note at the time, the virtual memory academic literature was about "local" LRU algorithms. I also did a variation on dynamic working set for limiting concurrently executing programs (as countermeasure to page thrashing). After joining IBM Cambridge Science Center, my changes were shipped in CP67 Release 3.

IBM Grenoble Scientific Center then modified the CP67 system to implement a "local" LRU algorithm for their 1mbyte 360/67 (155 page'able pages after fixed memory requirements). Grenoble had a very similar workload to Cambridge, but their throughput for 35 users (local LRU) was about the same as the Cambridge 768kbyte 360/67 (104 page'able pages) with 80 users (and global LRU) ... aka global LRU outperformed "local LRU" with more than twice the number of users and only 2/3rds the available memory.

Then there was the decision to add virtual memory to all 370s. A decade ago, I was asked to track down the decision and found somebody that was staff to the executive. Basically MVT storage management was so bad that region sizes had to be specified four times larger than actually used ... a typical 1mbyte 370/165 only had room for four concurrently executing regions, insufficient to keep the machine busy (and justified). Going to VS2/SVS (similar to running MVT in a CP67 16mbyte virtual machine) allowed the number of regions to be increased by four times with little or no paging. I got into a dustup with the POK performance group over what LRU replacement algorithm meant. They had identified a myopic optimization: scanning for non-changed pages (for replacement) meant not having to wait for the replaced page to be written out. It wasn't until late in the 70s with MVS that they realized they were selecting high-use linkpack pages for replacement before they would replace low-use application (changed) data pages. Pieces of email exchange about adding virtual memory to all 370s:
https://www.garlic.com/~lynn/2011d.html#73
recent article
https://www.linkedin.com/pulse/zvm-50th-part-5-lynn-wheeler/
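The "non-changed pages first" pitfall above can be shown with a toy comparison (illustrative Python, not MVS code; page names and counts are made up): preferring clean pages to avoid the page-out write ends up evicting a hot, read-only shared page ahead of a cold, dirty data page.

```python
# Toy illustration of the replacement pitfall: preferring non-changed
# pages can evict a high-use read-only page (e.g. shared linkpack code)
# before a low-use changed data page.
def pick_prefer_unchanged(frames):
    # "avoid the page-out write": take the first non-changed page found
    for f in frames:
        if not f["changed"]:
            return f["name"]
    return frames[0]["name"]

def pick_least_used(frames):
    # LRU-style: take the least-used page, changed or not
    return min(frames, key=lambda f: f["use_count"])["name"]

frames = [
    {"name": "linkpack-code", "changed": False, "use_count": 900},  # hot, read-only
    {"name": "app-data",      "changed": True,  "use_count": 3},    # cold, dirty
]
print(pick_prefer_unchanged(frames))  # evicts the hot shared code page
print(pick_least_used(frames))        # evicts the cold data page instead
```

The write the first policy saves on eviction is quickly dwarfed by the re-fault cost of the heavily shared code page it threw away.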

Jim Gray had departed IBM SJR for Tandem in fall of 1980. A year later, at the Dec81 ACM SIGOPS meeting, he asked me to help a co-worker get his Stanford PhD that heavily involved global LRU (the "local LRU" forces from the 60s academic work were heavily lobbying Stanford not to award a PhD for anything involving global LRU). Jim knew I had detailed stats on the Cambridge/Grenoble global/local LRU comparison (showing global significantly outperformed local). IBM executives stepped in and blocked me from sending a response for nearly a year (I hoped it was part of the punishment for being blamed for online computer conferencing in the late 70s through the early 80s on the company internal network ... and not that they were meddling in the academic dispute). Part of the eventual response:
https://www.garlic.com/~lynn/2006w.html#email821019
recent related mentioning online computer conferencing
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

paging and page replacement posts
https://www.garlic.com/~lynn/subtopic.html#wsclock
cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet

past posts about POK page replacement "optimization" involving non-changed pages and/or the decision to add virtual memory to all 370s
https://www.garlic.com/~lynn/2021c.html#38 Some CP67, Future System and other history
https://www.garlic.com/~lynn/2021b.html#59 370 Virtual Memory
https://www.garlic.com/~lynn/2017j.html#84 VS/Repack
https://www.garlic.com/~lynn/2014i.html#96 z/OS physical memory usage with multiple copies of same load module at different virtual addresses
https://www.garlic.com/~lynn/2014d.html#57 Difference between MVS and z / OS systems
https://www.garlic.com/~lynn/2014c.html#71 assembler
https://www.garlic.com/~lynn/2013g.html#11 What Makes code storage management so cool?
https://www.garlic.com/~lynn/2012c.html#28 5 Byte Device Addresses?
https://www.garlic.com/~lynn/2012c.html#17 5 Byte Device Addresses?
https://www.garlic.com/~lynn/2011.html#44 CKD DASD
https://www.garlic.com/~lynn/2010g.html#42 Interesting presentation
https://www.garlic.com/~lynn/2008o.html#50 Old XDS Sigma stuff
https://www.garlic.com/~lynn/2008f.html#19 Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical
https://www.garlic.com/~lynn/2007r.html#65 CSA 'above the bar'
https://www.garlic.com/~lynn/2007p.html#74 GETMAIN/FREEMAIN and virtual storage backing up
https://www.garlic.com/~lynn/2007c.html#56 SVCs
https://www.garlic.com/~lynn/2006r.html#39 REAL memory column in SDSF
https://www.garlic.com/~lynn/2006j.html#25 virtual memory
https://www.garlic.com/~lynn/2006j.html#17 virtual memory
https://www.garlic.com/~lynn/2006i.html#43 virtual memory
https://www.garlic.com/~lynn/2006b.html#15 {SPAM?} Re: Expanded Storage
https://www.garlic.com/~lynn/2005n.html#23 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#21 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#19 Code density and performance?
https://www.garlic.com/~lynn/2005f.html#47 Moving assembler programs above the line
https://www.garlic.com/~lynn/2004o.html#57 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004g.html#55 The WIZ Processor
https://www.garlic.com/~lynn/2004.html#13 Holee shit! 30 years ago!
https://www.garlic.com/~lynn/2003o.html#61 1teraflops cell processor possible?
https://www.garlic.com/~lynn/2002m.html#4 Handling variable page sizes?
https://www.garlic.com/~lynn/2002c.html#52 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2001b.html#18 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
https://www.garlic.com/~lynn/2000c.html#35 What level of computer is needed for a computer to Love?
https://www.garlic.com/~lynn/94.html#49 Rethinking Virtual Memory

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM/PC and Microchannel

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM/PC and Microchannel
Date: 05 Feb, 2023
Blog: Facebook
another communication group death grip story about severely performance-kneecapping microchannel cards as part of their fierce battle fighting off distributed computing and client/server; AWD (workstation division) had the PC/RT with AT-bus and had done their own 4mbit token-ring card. Then for the RS/6000 with microchannel, AWD was told they could only use PS2 microchannel cards. One example: the PS2 16mbit token-ring card had lower throughput than the PC/RT 4mbit token-ring card (i.e. a PC/RT 4mbit token-ring server would have higher throughput than an RS/6000 16mbit token-ring server; the 16mbit t/r microchannel card's design point was 300+ stations doing terminal emulation on a single network ... aka NOT client/server). The joke was that an RS/6000 limited to the severely performance-kneecapped PS2 microchannel cards (token-ring, display, disks, etc) wouldn't have better throughput than a PS2/486.

Note that the new research Almaden bldg had been heavily provisioned with CAT4, assuming token-ring. However, they would find that 10mbit ethernet over CAT4 had more aggregate LAN bandwidth and lower latency than 16mbit token-ring ... and $69 fast ethernet cards had higher per-card throughput than IBM's $800 microchannel 16mbit token-ring cards.

Late 80s, a senior disk engineer got a talk scheduled at the internal, world-wide, annual communication group conference, supposedly on 3174 performance, but opened the talk with the statement that the communication group would be responsible for the demise of the disk division. The scenario was that the communication group had a stranglehold on mainframe datacenters with their corporate strategic ownership of everything that crossed datacenter walls (and were fiercely fighting off distributed computing and client/server). The disk division was seeing data fleeing mainframe datacenters to more distributed-computing-friendly platforms, with a drop in disk sales. The disk division had come up with a number of solutions, but they were constantly being vetoed by the communication group. The GPD/Adstar VP's partial countermeasure (to the communication group death grip) was investing in distributed computing startups that would use IBM disks. He would have us in periodically to discuss his investments and asked if we could drop by and provide any assistance. One was "MESA Archival" in Boulder ... a spinoff startup of NCAR

misc. more about science center co-worker responsible for the internal network (not SNA) ... covers some of his battle with corporate over TCP/IP
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/

posts about communication group fighting off client/server and distributed computing
https://www.garlic.com/~lynn/subnetwork.html#terminal
801/risc, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech

--
virtualization experience starting Jan1968, online at home since Mar1970

The Progenitor of Inequalities - Corporate Personhood vs. Human Beings

From: Lynn Wheeler <lynn@garlic.com>
Subject: The Progenitor of Inequalities - Corporate Personhood vs. Human Beings
Date: 07 Feb, 2023
Blog: Facebook
The Progenitor of Inequalities - Corporate Personhood vs. Human Beings
https://www.counterpunch.org/2023/02/06/the-progenitor-of-inequalities-corporate-personhood-vs-human-beings/
How is that possible with the 14th Amendment mandating equal protection under the law? Because this central provision for our alleged rule of law didn't take into account the contrivances of corporate lawyers, corporate judges and corporate-indentured lawmakers.

Corporations that are created by state charters are deemed "artificial persons." States like Delaware and Nevada have made a revenue business out of chartering corporations under permissive laws that concentrate power at the top of autocratic commercial hierarchies, leaving their shareholder-owners with very few options other than to sell. Since the early 1800s, states have chartered corporations giving their shareholders limited liability. The maximum they can lose is the amount of dollars invested in their company's stocks or bonds. The modern history of corporate law is now aimed at maximizing the limited liability of the corporation itself.

.... snip ...

railroads scammed the supreme court
https://www.amazon.com/We-Corporations-American-Businesses-Rights-ebook/dp/B01M64LRDJ/
pgxiii/loc45-50:
IN DECEMBER 1882, ROSCOE CONKLING, A FORMER SENATOR and close confidant of President Chester Arthur, appeared before the justices of the Supreme Court of the United States to argue that corporations like his client, the Southern Pacific Railroad Company, were entitled to equal rights under the Fourteenth Amendment. Although that provision of the Constitution said that no state shall "deprive any person of life, liberty, or property, without due process of law" or "deny to any person within its jurisdiction the equal protection of the laws," Conkling insisted the amendment's drafters intended to cover business corporations too.

pgxiv/loc74-78:
Between 1868, when the amendment was ratified, and 1912, when a scholar set out to identify every Fourteenth Amendment case heard by the Supreme Court, the justices decided 28 cases dealing with the rights of African Americans--and an astonishing 312 cases dealing with the rights of corporations.

pg36/loc726-28:
On this issue, Hamiltonians were corporationalists--proponents of corporate enterprise who advocated for expansive constitutional rights for business. Jeffersonians, meanwhile, were populists--opponents of corporate power who sought to limit corporate rights in the name of the people.

pg229/loc3667-68:
IN THE TWENTIETH CENTURY, CORPORATIONS WON LIBERTY RIGHTS, SUCH AS FREEDOM OF SPEECH AND RELIGION, WITH THE HELP OF ORGANIZATIONS LIKE THE CHAMBER OF COMMERCE.

... snip ...

inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
racism posts
https://www.garlic.com/~lynn/submisc.html#racism

--
virtualization experience starting Jan1968, online at home since Mar1970

The Enormous Limitations of U.S. Liberal Democracy and Its Consequences: The Growth Of Fascism

From: Lynn Wheeler <lynn@garlic.com>
Subject: The Enormous Limitations of U.S. Liberal Democracy and Its Consequences: The Growth Of Fascism
Date: 07 Feb, 2023
Blog: Facebook
The Enormous Limitations of U.S. Liberal Democracy and Its Consequences: The Growth Of Fascism. THE U.S. HAS ONE OF THE LEAST DEMOCRATIC SYSTEMS EXISTING TODAY IN THE DEMOCRATIC WORLD
https://www.counterpunch.org/2023/02/06/the-enormous-limitations-of-u-s-liberal-democracy-and-its-consequences-the-growth-of-fascism/
This bias has created fertile conditions in the U.S. for fascism to grow. As someone who lived under a fascist regime in Spain, and knows fascism when I see it, I am alarmed by the growth of the ultra-right, with similar characteristics to the fascism I knew. Its growth is a consequence of the grave limitations of U.S. liberal democracy. Thus, it is premature to assume that a far-right takeover has been averted; on the contrary, it is time for urgent mobilization to stop it.

... snip ...

inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

ASCII/TTY Terminal Support

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: ASCII/TTY Terminal Support
Date: 07 Feb, 2023
Blog: Facebook
The science center installed CP67 at the univ (3rd installation after Cambridge itself and MIT Lincoln Labs). It came with automagic terminal type identification support for 2741 and 1052. The univ had some number of ASCII/TTY terminals (trivia: 360 terminal controller support for TTY/ASCII came in a box labeled "heathkit"). I added ASCII/TTY support and extended automagic terminal type identification to TTY/ASCII. I then wanted to have a single dialup number for all terminal types
https://en.wikipedia.org/wiki/Line_hunting

... didn't quite work; it was possible to switch the port line-scanner type ... but IBM had taken a shortcut and hard-wired each port's line speed.
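The automagic identification loop amounts to trying each line-scanner type in turn until a probe succeeds. A toy sketch (illustrative Python, not CP67 code; `probe_responds` is a hypothetical stand-in for sending each terminal type's identification sequence and checking for a sane answer):

```python
# Toy sketch of "automagic" terminal type identification on a shared
# dialup port: cycle the port's line-scanner type until a probe succeeds.
def identify_terminal(port, probe_responds, types=("2741", "1052", "TTY")):
    for ttype in types:
        port["scanner"] = ttype      # CP67 could switch the scanner type...
        if probe_responds(port, ttype):
            return ttype
    # ...but the hard-wired per-port line speed defeated a single dialup
    # number in practice: e.g. a 134.5-baud 2741 and a 110-baud TTY
    # could not share one hard-wired port.
    return None

port = {"scanner": None}
print(identify_terminal(port, lambda p, t: t == "TTY"))
```

With the lambda above standing in for a TTY on the line, the loop falls through 2741 and 1052 and settles on "TTY".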

Thus was born the univ project to build our own clone controller ... building a (wire-wrap) channel interface board for an Interdata/3 programmed to emulate the IBM controller, with the addition of supporting automatic line speed. Later it was enhanced with an Interdata/4 for the channel interface and a cluster of Interdata/3s for the port interfaces. Interdata (and later Perkin/Elmer) sold it commercially as an IBM clone controller. Four of us at the univ get written up as responsible for (some part of the) clone controller business.
https://en.wikipedia.org/wiki/Interdata
https://en.wikipedia.org/wiki/Perkin-Elmer#Computer_Systems_Division
https://en.wikipedia.org/wiki/Concurrent_Computer_Corporation

trivia: 360s were supposed to be 8-bit ASCII ... but through a (comedy? of) circumstances, it went to EBCDIC instead (gone 404, but lives on at the wayback machine):
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/P-BIT.HTM
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/ASCII.HTM
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/BACSLASH.HTM
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/FATHEROF.HTM
https://web.archive.org/web/20180513184025/http://www.bobbemer.com/HISTORY.HTM

some of the MIT CTSS/7094 people went to the 5th flr for Project MAC and Multics. Others went to the science center on the 4th flr and did virtual machines, the internal network, various online and performance apps, and invented GML in 1969 (which morphs into ISO standard SGML after a decade, and after another decade morphs into HTML at CERN)

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
GML/SGML/HTML posts
https://www.garlic.com/~lynn/submain.html#sgml
plug compatible controllers posts
https://www.garlic.com/~lynn/submain.html#360pcm

CTSS
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
Multics
https://en.wikipedia.org/wiki/Multics
CP67 & CMS
https://en.wikipedia.org/wiki/CP/CMS
https://en.wikipedia.org/wiki/Conversational_Monitor_System
Melinda's VM history
https://www.leeandmelindavarian.com/Melinda#VMHist
--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 4341

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 4341
Date: 08 Feb, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2023.html#72 IBM 4341
https://www.garlic.com/~lynn/2023.html#73 IBM 4341
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#76 IBM 4341

VM4300s sold into the same mid-range market as DEC VAX/VMS ... and in about the same numbers, except for large corporations ordering hundreds at a time for placing out in departmental distributed computing areas. The 4361/4381 were planned assuming the same explosion continued ... but by the mid-80s the mid-range market was starting to shift to workstations and large PCs (servers). This archived post has a decade of VAX/VMS numbers sliced & diced by model, year, and US/non-US:
https://www.garlic.com/~lynn/2002f.html#0

--
virtualization experience starting Jan1968, online at home since Mar1970

Memories of Mosaic

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Memories of Mosaic
Date: 09 Feb, 2023
Blog: Linkedin
We were doing HA/CMP ... it had originally started out as HA/6000 for the NYTimes to move their newspaper system (ATEX) off VAX/Cluster to RS/6000. I then rename it HA/CMP (High Availability Cluster Multi-Processing) when I start doing technical cluster scale-up with the national labs and commercial cluster scale-up with the RDBMS vendors (Ingres, Informix, Sybase, and Oracle, who had VAXCluster support in a common source base with their other products; I do an API with VAXCluster semantics to simplify the ports).
https://en.wikipedia.org/wiki/IBM_High_Availability_Cluster_Multiprocessing
Old archive post with reference to Jan92 cluster scale-up meeting with Oracle CEO (16processor by mid92, 128processor by ye92)
https://www.garlic.com/~lynn/95.html#13

Within a few weeks of the Ellison meeting, cluster scale-up is transferred, announced as IBM supercomputer (for technical/scientific *ONLY*), and we were told we couldn't work on anything with more than four processors. We leave IBM a few months later. Later we are brought in as consultants with a small client/server startup. Two of the former Oracle people (we had worked with on cluster scale-up) were there, responsible for something called "commerce server", and wanted to do payment transactions on the server. The startup had also invented this technology they called SSL that they wanted to use; the result is now frequently called "electronic commerce". I had absolute authority for everything between the webservers and the financial industry payment networks. I then created the "Why Internet Isn't Business Critical Dataprocessing" talk based on the work I had to do for electronic commerce.

Starting in early 80s, had HSDT project, T1 (1.5mbits/sec) and faster computer links; and was supposed to get $20M from the NSF director to interconnect the NSF supercomputer sites. Then congress cuts the budget, some other things happen, and finally an RFP is released (based in part on what we already had running). Preliminary announcement (28Mar1986)
https://www.garlic.com/~lynn/2002k.html#12
The OASC has initiated three programs: The Supercomputer Centers Program to provide Supercomputer cycles; the New Technologies Program to foster new supercomputer software and hardware developments; and the Networking Program to build a National Supercomputer Access Network - NSFnet.

... snip ...

IBM internal politics did not allow us to bid. Being blamed for online computer conferencing (precursor to social media) inside IBM likely contributed; folklore is that 5 of 6 members of the corporate executive committee wanted to fire me. The NSF director tried to help by writing the company a letter (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO) with support from other gov. agencies ... but that just made the internal politics worse (as did claims that what we already had operational was at least 5yrs ahead of the winning bid, RFP awarded 24Nov87). As regional networks connect in, it becomes the NSFNET backbone, precursor to the modern internet
https://www.technologyreview.com/s/401444/grid-computing/

trivia1: the first webserver in the US was Stanford SLAC (CERN sister installation) on their VM370 system
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994

trivia2: OASC (28mar1986 preliminary announce) funding for software went to NCSA
https://beta.nsf.gov/news/mosaic-launches-internet-revolution
which developed MOSAIC (browser). Some of the people left NCSA and did the MOSAIC startup. The name changed to NETSCAPE when NCSA complained about the use of "MOSAIC". trivia3: what silicon valley company provided them with the name "NETSCAPE"?

As load on early webservers started to ramp up, there was a six-month or so period where they would hit 90% CPU running the FINWAIT list (before vendors started releasing fixes). NETSCAPE installed a large Sequent multiprocessor, which had previously fixed the FINWAIT problem in Dynix.
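A minimal sketch of why the FINWAIT problem burned so much CPU (illustrative only, not the actual vendor or Dynix code): early TCP stacks kept closing connections on a simple linear list that was walked on every timer tick or packet, so cost grew with the number of connections stuck in FIN_WAIT; replacing the scan with a hashed lookup makes the per-connection cost effectively constant.

```python
# Illustrative contrast between a linear FIN_WAIT list scan and a hashed
# lookup.  Function names and data shapes are invented for this sketch.

def linear_scan_lookup(finwait_list, conn_id):
    """O(n): walk the whole FIN_WAIT list looking for one connection.
    Returns the number of entries touched."""
    steps = 0
    for entry in finwait_list:
        steps += 1
        if entry == conn_id:
            return steps
    return steps

def hashed_lookup(finwait_set, conn_id):
    """O(1) expected: hash-table membership test."""
    return conn_id in finwait_set

# 10,000 connections lingering in FIN_WAIT (busy early webserver)
conns = list(range(10_000))
finwait_set = set(conns)

# finding the last connection touches every entry with a list ...
assert linear_scan_lookup(conns, 9_999) == 10_000
# ... but is a single probe with a hash table
assert hashed_lookup(finwait_set, 9_999) is True
```

With thousands of short-lived HTTP connections closing per minute, the linear scan was being paid over and over, which matches the reported symptom of servers pegged at 90% CPU doing nothing but walking the FINWAIT list.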

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
NSFNET posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
NETSCAPE gateway to payment network posts
https://www.garlic.com/~lynn/subnetwork.html#gateway

archived posts mentioning FINWAIT & Sequent
https://www.garlic.com/~lynn/2023.html#42 IBM AIX
https://www.garlic.com/~lynn/2022e.html#28 IBM "nine-net"
https://www.garlic.com/~lynn/2021k.html#80 OSI Model
https://www.garlic.com/~lynn/2021h.html#86 IBM Research, Adtech, Science Center
https://www.garlic.com/~lynn/2019.html#74 21 random but totally appropriate ways to celebrate the World Wide Web's 30th birthday
https://www.garlic.com/~lynn/2018f.html#102 Netscape: The Fire That Filled Silicon Valley's First Bubble
https://www.garlic.com/~lynn/2018d.html#63 tablets and desktops was Has Microsoft
https://www.garlic.com/~lynn/2017i.html#45 learning Unix, was progress in e-mail, such as AOL
https://www.garlic.com/~lynn/2017c.html#54 The ICL 2900
https://www.garlic.com/~lynn/2017c.html#52 The ICL 2900
https://www.garlic.com/~lynn/2016e.html#127 Early Networking
https://www.garlic.com/~lynn/2016e.html#43 How the internet was invented
https://www.garlic.com/~lynn/2015g.html#96 TCP joke
https://www.garlic.com/~lynn/2015f.html#71 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2014g.html#13 Is it time for a revolution to replace TLS?
https://www.garlic.com/~lynn/2013i.html#46 OT: "Highway Patrol" back on TV
https://www.garlic.com/~lynn/2013h.html#8 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2013c.html#83 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012i.html#15 Can anybody give me a clear idea about Cloud Computing in MAINFRAME ?
https://www.garlic.com/~lynn/2012d.html#20 Writing article on telework/telecommuting
https://www.garlic.com/~lynn/2011g.html#11 Is the magic and romance killed by Windows (and Linux)?
https://www.garlic.com/~lynn/2010p.html#9 The IETF is probably the single element in the global equation of technology competition than has resulted in the INTERNET
https://www.garlic.com/~lynn/2010m.html#51 Has there been a change in US banking regulations recently?
https://www.garlic.com/~lynn/2010b.html#62 Happy DEC-10 Day
https://www.garlic.com/~lynn/2009e.html#7 IBM in Talks to Buy Sun
https://www.garlic.com/~lynn/2008p.html#36 Making tea
https://www.garlic.com/~lynn/2008m.html#28 Yet another squirrel question - Results (very very long post)
https://www.garlic.com/~lynn/2006m.html#37 Curiosity
https://www.garlic.com/~lynn/2005o.html#13 RFC 2616 change proposal to increase speed
https://www.garlic.com/~lynn/2005g.html#42 TCP channel half closed
https://www.garlic.com/~lynn/2003h.html#50 Question about Unix "heritage"
https://www.garlic.com/~lynn/2003e.html#33 A Speculative question
https://www.garlic.com/~lynn/2002q.html#12 Possible to have 5,000 sockets open concurrently?
https://www.garlic.com/~lynn/2002j.html#45 M$ SMP and old time IBM's LCMP
https://www.garlic.com/~lynn/2002i.html#39 CDC6600 - just how powerful a machine was it?

--
virtualization experience starting Jan1968, online at home since Mar1970

Memories of Mosaic

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Memories of Mosaic
Date: 10 Feb, 2023
Blog: Linkedin
re:
https://www.garlic.com/~lynn/2023.html#82 Memories of Mosaic

misc recent other with network/internet history
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

got misc archived stuff from 1994 ... directory with a little from NCSA
Aug 25 1994 strict-html.html.gz
Aug 25 1994 htmldoc.html.gz
Aug 25 1994 html-design.html.gz
Aug 25 1994 about_html.html.gz
Aug 25 1994 HTML_quick.html.gz
Aug 25 1994 Introduction.html.gz
Aug 25 1994 SGML.html.gz
Aug 25 1994 MarkUp.html.gz
Aug 25 1994 demo.html.gz
Aug 25 1994 html-primer.html.gz
Apr 24 1994 README-binaries.gz
Apr 20 1994 url-primer.ps.gz
Apr 20 1994 mosaic.ps.gz
Apr 20 1994 html-primer.ps.gz
Apr 20 1994 getting-started.ps.gz
Apr 20 1994 README.incoming.gz
Apr 20 1994 README.Mosaic-2.0.gz
Apr 20 1994 README.Mosaic.gz
Apr 20 1994 sunil.html.gz
Apr 19 1994 Mosaic-Security-Issues.gz
Apr 19 1994 Mosaic-for-Microsoft-Windows-Home-Page.gz
Apr 19 1994 Starting-Points-for-Internet-Exploration.gz
Apr 19 1994 About-NCSA-Mosaic-for-the-X-Window-System.gz
Apr 19 1994 NCSA-Mosaic-Demo-Document.gz
Apr 19 1994 NCSA-Mosaic-Home-Page.gz
Apr 19 1994 InterNIC-Info-Source.gz
Apr 19 1994 The-World-Wide-Web-Initiative.gz
Apr 19 1994 Overview-of-the-Web.gz

... snip ...

--
virtualization experience starting Jan1968, online at home since Mar1970

Memories of Mosaic

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Memories of Mosaic
Date: 10 Feb, 2023
Blog: Linkedin
re:
https://www.garlic.com/~lynn/2023.html#82 Memories of Mosaic
https://www.garlic.com/~lynn/2023.html#83 Memories of Mosaic

other trivia:

some of the MIT CTSS/7094 people
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
had gone to project MAC on 5th flr, for multics
https://en.wikipedia.org/wiki/Multics
others went to the IBM science center on the 4th flr and did virtual machines (cp40/cms & cp67/cms, precursor to vm370),
https://en.wikipedia.org/wiki/CP/CMS
online & performance apps, internal corporate network (see zvm-50th-part-3 account), etc. CTSS RUNOFF
https://en.wikipedia.org/wiki/TYPSET_and_RUNOFF#CTSS

was redone as "script" for CMS. Then GML was invented at the science center in 1969 (the letters chosen from the first letters of the inventors' last names) and GML tag processing was added to script.
https://web.archive.org/web/20230703135757/http://www.sgmlsource.com/history/sgmlhist.htm
after another decade, GML morphs into ISO standard SGML ... and then after another decade, SGML morphs into HTML at CERN.

Stanford SLAC was heavy VM370 user (follow-on to CP67) and hosted the monthly bay area user group meetings "BAYBUNCH". SLAC (CERN sister installation) hosted 1st webserver in the US on its VM370 system:
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
GML, SGML history
https://web.archive.org/web/20230703135757/http://www.sgmlsource.com/history/sgmlhist.htm
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

Memories of Mosaic

From: Lynn Wheeler <lynn@garlic.com>
Subject: Memories of Mosaic
Date: 11 Feb, 2023
Blog: Linkedin
re:
https://www.garlic.com/~lynn/2023.html#82 Memories of Mosaic
https://www.garlic.com/~lynn/2023.html#83 Memories of Mosaic
https://www.garlic.com/~lynn/2023.html#84 Memories of Mosaic

while working on netscape "electronic commerce", I was also doing work with one of the payment card processing outsourcers that had a hdqtrs office on Hansen Way (a block or two from El Camino). I get a call from the east coast that the largest online service provider was having its internet-facing servers crashing ... it had started the month before and they had all the usual experts in to diagnose the problem. He was going to fly out and buy me a hamburger after work (restaurant/cafe on El Camino just south of Hansen Way). While I eat the hamburger, he describes the symptoms. I then say that was one of the problems identified (sort of a crack between the standards specification and implementation) when I was doing IBM HA/CMP ... and provide a quick&dirty patch that he applies later that night. I then try to get the standard server vendors to incorporate a fix ... but they said nobody was complaining (i.e. the service provider didn't want it in the news). Exactly a year later it hit the press when a (different) service provider in NY was having the problem ... and the server vendors pat themselves on the back that they were able to ship fixes within a month.

HA/CMP posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

archived posts mentioning largest online service provider
https://www.garlic.com/~lynn/2021c.html#69 Online History
https://www.garlic.com/~lynn/2017h.html#119 AOL
https://www.garlic.com/~lynn/2017g.html#81 Running unsupported is dangerous was Re: AW: Re: LE strikes again
https://www.garlic.com/~lynn/2017e.html#15 The Geniuses that Anticipated the Idea of the Internet
https://www.garlic.com/~lynn/2016d.html#79 Is it a lost cause?
https://www.garlic.com/~lynn/2015e.html#25 The real story of how the Internet became so vulnerable
https://www.garlic.com/~lynn/2015c.html#104 On a lighter note, even the Holograms are demonstrating
https://www.garlic.com/~lynn/2012o.html#68 PC/mainframe browser(s) was Re: 360/20, was 1132 printer history
https://www.garlic.com/~lynn/2012k.html#60 Core characteristics of resilience
https://www.garlic.com/~lynn/2009g.html#11 Top 10 Cybersecurity Threats for 2009, will they cause creation of highly-secure Corporate-wide Intranets?
https://www.garlic.com/~lynn/2008n.html#35 Builders V. Breakers
https://www.garlic.com/~lynn/2008l.html#21 recent mentions of 40+ yr old technology
https://www.garlic.com/~lynn/2008b.html#34 windows time service
https://www.garlic.com/~lynn/2007p.html#40 what does xp do when system is copying
https://www.garlic.com/~lynn/2006e.html#11 Caller ID "spoofing"
https://www.garlic.com/~lynn/2005c.html#51 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/aadsm26.htm#17 Changing the Mantra -- RFC 4732 on rethinking DOS

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM San Jose

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM San Jose
Date: 11 Feb, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2023.html#72 IBM 4341
https://www.garlic.com/~lynn/2023.html#73 IBM 4341
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#81 IBM 4341

... downhill started late 60s/early 70s ... then Learson trying to block the bureaucrats, careerists, and MBAs from destroying the Watsons' legacy (see if this gets lost in pending)
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

and getting to play disk engineer in both bldg14&bldg15
https://www.linkedin.com/pulse/mainframe-channel-io-lynn-wheeler/
https://www.linkedin.com/pulse/mainframe-channel-redrive-lynn-wheeler/
and engineering 3033 and 4341 in bldg15
https://www.linkedin.com/pulse/ibm4341-lynn-wheeler

getting to play disk engineer in bldg14&bldg15 archived posts
https://www.garlic.com/~lynn/subtopic.html#disk

3380 trivia: the original 3380 had 20 track spacings between each data track; that was then cut in half to double the number of tracks (& cylinders) for 3380E, and the spacing cut again to triple the number of tracks (& cylinders) for 3380K ... still the same 3mbyte/sec channels. The (IBM) father of RISC computing tries to get me to help with his "wide-head" idea ... a head that spans 18 closely spaced tracks ... the surface formatted with 16 data tracks plus servo tracks ... the "wide-head" would follow the two servo tracks on either side of the 16 data tracks, transferring data at 3mbytes/sec from each track, 48mbytes/sec aggregate. Problem was IBM mainframe I/O wouldn't support 48mbyte/sec I/O ... any more than it would support 48mbyte/sec RAID I/O.

801/risc, romp, rios, pc/rt, rs/6000, power, power/pc archived posts
https://www.garlic.com/~lynn/subtopic.html#801

archived posts mentioning disk "wide-head" proposal
https://www.garlic.com/~lynn/2022f.html#61 200TB SSDs could come soon thanks to Micron's new chip
https://www.garlic.com/~lynn/2019b.html#52 S/360
https://www.garlic.com/~lynn/2019.html#58 Bureaucracy and Agile
https://www.garlic.com/~lynn/2018b.html#111 Didn't we have this some time ago on some SLED disks? Multi-actuator
https://www.garlic.com/~lynn/2017g.html#95 Hard Drives Started Out as Massive Machines That Were Rented by the Month
https://www.garlic.com/~lynn/2017d.html#54 GREAT presentation on the history of the mainframe

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM San Jose

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM San Jose
Date: 11 Feb, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2023.html#72 IBM 4341
https://www.garlic.com/~lynn/2023.html#73 IBM 4341
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#81 IBM 4341
https://www.garlic.com/~lynn/2023.html#86 IBM San Jose

Very early 80s, the (San Jose) bldg26 datacenter had run out of space to meet increased (MVS) computing requirements. Bldg26 was looking at the burgeoning vm/4341s being deployed in departmental areas ... however the CKD simulation (3375) wasn't yet available, and the only non-datacenter disks were the 3370 FBA (which MVS didn't support). There were also some studies mapping MVS 168-3 CPU use to (MVS) 4341s. One problem that showed up was that their analysis used only MVS "captured" CPU ... and many of those MVS systems only had a 50% "capture-ratio" (i.e. the other 50% of CPU use was uncaptured/unaccounted for; MVS did measure "wait state" time ... so aggregate total CPU could be inferred by subtracting total wait state from total elapsed time).
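The capture-ratio arithmetic above can be sketched directly: MVS accounted ("captured") only part of the CPU it actually burned, but because it did record wait-state time, total CPU use can be inferred as elapsed time minus wait time. The numbers below are made up for illustration; only the 50% ratio comes from the text.

```python
# Hedged sketch of MVS capture-ratio arithmetic.  Input figures are
# invented; the method (total CPU = elapsed - wait) is from the post.

def inferred_total_cpu(elapsed_secs, wait_secs):
    """Total CPU actually used, inferred from elapsed and wait-state time."""
    return elapsed_secs - wait_secs

def capture_ratio(captured_cpu_secs, elapsed_secs, wait_secs):
    """Fraction of real CPU use that MVS accounting actually attributed."""
    return captured_cpu_secs / inferred_total_cpu(elapsed_secs, wait_secs)

elapsed  = 3600.0   # one hour of wall-clock time
wait     = 1600.0   # measured wait-state time
captured = 1000.0   # CPU time MVS attributed to workloads

total = inferred_total_cpu(elapsed, wait)          # 2000 CPU-seconds really used
ratio = capture_ratio(captured, elapsed, wait)

assert total == 2000.0
assert ratio == 0.5    # the 50% capture-ratio mentioned above
```

Sizing 4341 replacements from captured CPU alone would therefore have undercounted the real 168-3 load by a factor of two on such systems.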

Some of the major bldg26 MVS applications required more OS/360 services than provided by the 64kbyte OS/360 simulation in CMS ... however some work out in Los Gatos/bldg29 found that with 12kbytes more of OS/360 simulation code, they could move many of the MVS applications that previously wouldn't port to CMS (the joke was that the CMS 64kbyte OS/360 simulation was more efficient than the MVS 8mbyte OS/360 simulation).

Other trivia: in the mid-70s, the VM370/CMS group was greatly expanding the CMS OS/360 simulation (including being able to directly support OS/360 CKD disk I/O). However this was in the period after the "Future System" implosion, with quick&dirty efforts to get stuff back into the 370 product pipelines, and the head of POK convincing corporate to kill the VM370 product, shut down the development group, and transfer all the people to POK for MVS/XA (claiming otherwise MVS/XA wouldn't be ready to ship on time). In that shutdown ... all the VM370/CMS in-progress & unshipped product work seemed to evaporate. Note: Endicott eventually was able to acquire the VM370/CMS product mission, but had to reconstitute a development group from scratch.

future system archived posts
https://www.garlic.com/~lynn/submain.html#futuresys

archived posts mentioning MVS "capture-ratio"
https://www.garlic.com/~lynn/2022.html#104 Mainframe Performance
https://www.garlic.com/~lynn/2022.html#21 Departmental/distributed 4300s
https://www.garlic.com/~lynn/2021c.html#88 IBM SNA/VTAM (& HSDT)
https://www.garlic.com/~lynn/2017i.html#73 When Working From Home Doesn't Work
https://www.garlic.com/~lynn/2017d.html#51 CPU Timerons/Seconds vs Wall-clock Time
https://www.garlic.com/~lynn/2015f.html#68 1973--TI 8 digit electric calculator--$99.95
https://www.garlic.com/~lynn/2014b.html#80 CPU time
https://www.garlic.com/~lynn/2014b.html#78 CPU time
https://www.garlic.com/~lynn/2013d.html#14 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2013d.html#8 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012j.html#71 Help with elementary CPU speed question
https://www.garlic.com/~lynn/2012h.html#70 How many cost a cpu second?
https://www.garlic.com/~lynn/2010m.html#39 CPU time variance
https://www.garlic.com/~lynn/2010e.html#76 LPARs: More or Less?
https://www.garlic.com/~lynn/2010e.html#33 SHAREWARE at Its Finest
https://www.garlic.com/~lynn/2010d.html#66 LPARs: More or Less?
https://www.garlic.com/~lynn/2008.html#42 Inaccurate CPU% reported by RMF and TMON
https://www.garlic.com/~lynn/2007g.html#82 IBM to the PCM market
https://www.garlic.com/~lynn/2006v.html#19 Ranking of non-IBM mainframe builders?

--
virtualization experience starting Jan1968, online at home since Mar1970

Northern Va. is the heart of the internet. Not everyone is happy about that

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Northern Va. is the heart of the internet. Not everyone is happy about that.
Date: 12 Feb, 2023
Blog: Facebook
Northern Va. is the heart of the internet. Not everyone is happy about that.
https://www.washingtonpost.com/dc-md-va/2023/02/10/data-centers-northern-virginia-internet/

other internet trivia:
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/
https://www.linkedin.com/pulse/inventing-internet-lynn-wheeler/
https://www.linkedin.com/pulse/memories-mosaic-lynn-wheeler/

hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
nsfnet posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
electronic commerce (and internet gateway to payment network) posts
https://www.garlic.com/~lynn/subnetwork.html#gateway

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM San Jose

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM San Jose
Date: 12 Feb, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2023.html#72 IBM 4341
https://www.garlic.com/~lynn/2023.html#73 IBM 4341
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#81 IBM 4341
https://www.garlic.com/~lynn/2023.html#86 IBM San Jose
https://www.garlic.com/~lynn/2023.html#87 IBM San Jose

channel trivia: in 1980, STL was bursting at the seams and 300 people from the IMS group were being moved to an offsite bldg, with dataprocessing back to the STL datacenter. They had tried "remote 3270", but found the human factors totally unacceptable. I get con'ed into doing channel-extender support, placing channel-attached 3270s at the offsite bldg ... with no perceived difference in human factors between offsite and in STL. The hardware vendor then tries to get IBM to release my support, but there are some engineers in POK playing with some serial stuff who get it veto'ed (concerned that if it was in the market, it would make it harder to justify releasing their stuff).

In 1988, the IBM branch office asks me to help LLNL get some serial stuff they are playing with standardized ... which quickly becomes the Fibre Channel Standard (FCS, including some stuff I had done in 1980), initially gbit, full-duplex (2gbit/sec aggregate, i.e. 200mbytes/sec). Then in 1990, the POK stuff is released with ES/9000 as ESCON (when it was already obsolete, approx 17mbytes/sec).

Then some POK engineers become involved with FCS and define a heavy-weight protocol that drastically reduces the native throughput, eventually released as FICON. The most recently published numbers I've found are a "peak I/O" z196 benchmark which got 2M IOPS using 104 FICON channels. About the same time, an FCS was announced for E5-2600 blades that claimed over a million IOPS (two such native FCS have higher throughput than all 104 FICON).

Further aggravating throughput&overhead is the requirement for CKD DASD ... which haven't been made for decades, all being emulated on industry-standard fixed-block disks.

channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender
FICON posts
https://www.garlic.com/~lynn/submisc.html#ficon
DASD, CKD, FBA, multi-track search, etc
https://www.garlic.com/~lynn/submain.html#dasd

--
virtualization experience starting Jan1968, online at home since Mar1970

Performance Predictor, IBM downfall, and new CEO

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Performance Predictor, IBM downfall, and new CEO
Date: 12 Feb, 2023
Blog: Facebook
Performance Predictor, IBM downfall, and new CEO

The Science Center, besides doing CP40/CMS & CP67/CMS (virtual machine precursor to VM370), had also ported APL\360 to CMS for CMS\APL (redoing storage management from 16kbyte swapped workspaces to large virtual memory demand-paged workspaces ... and adding APIs for system services like file I/O, enabling real-world applications), invented GML in 1969 (after a decade it morphs into ISO standard SGML, and after another decade SGML morphs into HTML at CERN), and built lots of performance tools ... including a system analytical throughput model implemented in CMS\APL. This was made available on the (world-wide sales&marketing support) HONE system as the Performance Predictor (SEs could enter customer configuration and workload profiles and ask "what-if" questions about configuration and/or workload changes).
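The Performance Predictor itself was an APL analytical model whose internals aren't given here; as a hedged illustration of the kind of "what-if" question it answered, a toy open-queueing estimate works: given CPU service demand per transaction and an arrival rate, predict utilization and response time, then ask what a configuration change would do. This is not the Predictor's actual model, just the same style of analytic what-if.

```python
# Toy analytic "what-if" in the spirit of the Performance Predictor
# (an assumed M/M/1-style estimate, NOT the actual HONE model).

def what_if(arrival_rate_tps, cpu_secs_per_txn):
    """Return (utilization, response time) for one CPU resource."""
    util = arrival_rate_tps * cpu_secs_per_txn
    if util >= 1.0:
        return util, float("inf")          # saturated: queue grows without bound
    resp = cpu_secs_per_txn / (1.0 - util)
    return util, resp

# current workload: 8 txns/sec at 0.1 CPU-sec each
u1, r1 = what_if(8.0, 0.1)
# what-if: a CPU twice as fast (0.05 CPU-sec per transaction)
u2, r2 = what_if(8.0, 0.05)

assert abs(u1 - 0.8) < 1e-9 and abs(r1 - 0.5) < 1e-9
assert abs(u2 - 0.4) < 1e-9     # half the utilization ...
assert r2 < r1                  # ... and much better response time
```

The value of such a model is exactly what the post describes: an SE could answer "what happens if we add workload or upgrade the CPU" without running the experiment on the customer's machine.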

I had long left IBM, but at the turn of the century I was brought into a large financial outsourcing datacenter doing something like half of all US credit card processing (for banks and other financial institutions). They had something like 40+ mainframes (@$30M each, constant rolling upgrades, none older than the previous generation), all running a 450K-statement Cobol application, the number needed to finish batch settlement in the overnight batch window. They had a large performance group that had been managing the care&feeding for decades. I used some other performance analysis technology (from science center days) and found a 14% improvement.

There was another performance consultant from Europe who, during the IBM troubles of the early 90s (when IBM was unloading lots of stuff), had acquired rights to a descendant of the Performance Predictor, ran it through an APL->C converter, and was using it for a large (IBM mainframe and non-IBM) datacenter performance consulting business ... he found a different 7% improvement ... a total 21% improvement (savings on >$1B of IBM mainframes).

trivia: the outsourcing company had been spun off from AMEX in 1992 in the largest IPO up until that time ... several of the executives had previously reported to Gerstner (when he was president of AMEX).

This was when IBM had one of the largest losses in history of US companies and was being reorged into the 13 "baby blues" in preparation for breaking up the company.
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
may also work
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

we had already left IBM, but we get a call from the bowels of Armonk (corp hdqtrs) asking if we could help with the breakup of the company. Lots of business units were using supplier contracts in other units via MOUs. After the breakup, all of these contracts would be in different companies ... all of those MOUs would have to be cataloged and turned into their own contracts (however, before we get started, the board brings in a new CEO and reverses the breakup).

AMEX had been in competition with KKR for (private equity) LBO (reverse IPO) of RJR and KKR wins. KKR runs into trouble and hires away AMEX president to help with RJR
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco

The IBM Board then hires former president of AMEX as new CEO, who reverses the breakup and uses some of the same tactics used at RJR (gone 404, but lives on at wayback machine)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml
above some IBM related specifics from
https://www.amazon.com/Retirement-Heist-Companies-Plunder-American-ebook/dp/B003QMLC6K/

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
HONE (&/or APL) posts
https://www.garlic.com/~lynn/subtopic.html#hone
IBM down turn/fall
https://www.garlic.com/~lynn/submisc.html#ibmdownfall
Gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
pension posts
https://www.garlic.com/~lynn/submisc.html#pensions

some Performance Predictor & work on 450k statement cobol application
https://www.garlic.com/~lynn/2022f.html#3 COBOL and tricks
https://www.garlic.com/~lynn/2022.html#104 Mainframe Performance
https://www.garlic.com/~lynn/2021k.html#120 Computer Performance
https://www.garlic.com/~lynn/2021j.html#30 VM370, 3081, and AT&T Long Lines
https://www.garlic.com/~lynn/2021i.html#10 A brief overview of IBM's new 7 nm Telum mainframe CPU
https://www.garlic.com/~lynn/2021e.html#61 Performance Monitoring, Analysis, Simulation, etc
https://www.garlic.com/~lynn/2019c.html#80 IBM: Buying While Apathetaic
https://www.garlic.com/~lynn/2018d.html#2 Has Microsoft commuted suicide
https://www.garlic.com/~lynn/2017d.html#43 The Pentagon still uses computer software from 1958 to manage its contracts
https://www.garlic.com/~lynn/2015h.html#112 Is there a source for detailed, instruction-level performance info?
https://www.garlic.com/~lynn/2015c.html#65 A New Performance Model ?
https://www.garlic.com/~lynn/2011e.html#63 Collection of APL documents
https://www.garlic.com/~lynn/2009d.html#5 Why do IBMers think disks are 'Direct Access'?
https://www.garlic.com/~lynn/2008l.html#81 Intel: an expensive many-core future is ahead of us
https://www.garlic.com/~lynn/2008c.html#24 Job ad for z/OS systems programmer trainee
https://www.garlic.com/~lynn/2007u.html#21 Distributed Computing

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 4341

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 4341
Date: 12 Feb, 2023
Blog: Linkedin
re:
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2023.html#72 IBM 4341
https://www.garlic.com/~lynn/2023.html#73 IBM 4341
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#76 IBM 4341
https://www.garlic.com/~lynn/2023.html#81 IBM 4341
https://www.garlic.com/~lynn/2023.html#86 IBM San Jose
https://www.garlic.com/~lynn/2023.html#87 IBM San Jose
https://www.garlic.com/~lynn/2023.html#89 IBM San Jose

account of co-worker at science center responsible for internal network
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/
technology also used for the corporate-sponsored BITNET
https://en.wikipedia.org/wiki/BITNET

... and EARN ... old email from a former co-worker in France (who did a year at CSC) about an assignment in Paris to set up EARN
https://www.garlic.com/~lynn/2001h.html#email840320

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet (& EARN) posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

other MVS / VM370 SMP trivia:

The standard MVS SMP statement was that a two-processor system only had 1.2-1.5 times the throughput of a single processor. 158&168 multiprocessor configurations had the processor cycle reduced by 10% (to account for cross-cache protocol chatter), so two-processor hardware was only 1.8 times (i.e. 2*0.9) a single processor. Then throw in the significant MVS multiprocessor system overhead and you only get 1.2-1.5 times a single processor.
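The 1.2-1.5x figure decomposes into two factors, which can be written out directly: the 10% cycle slowdown for cross-cache chatter caps two-processor hardware at 2 * 0.9 = 1.8x, and MVS software overhead then eats a further slice. The software-efficiency values below are illustrative back-solves, not measured numbers.

```python
# Decomposition of two-processor MVS throughput: hardware cycle factor
# from the text (0.9), software efficiency values assumed for illustration.

def mp_throughput(n_cpus, cycle_factor, software_efficiency):
    """Effective throughput in single-processor units."""
    return n_cpus * cycle_factor * software_efficiency

# hardware ceiling with no software overhead: 2 * 0.9 = 1.8
hardware_max = mp_throughput(2, 0.9, 1.0)
assert abs(hardware_max - 1.8) < 1e-9

# MVS overhead in the ~17-33% range lands in the quoted 1.2-1.5 band
low  = mp_throughput(2, 0.9, 0.67)   # heavy SMP overhead
high = mp_throughput(2, 0.9, 0.83)   # lighter SMP overhead
assert 1.2 <= low <= 1.5 and 1.2 <= high <= 1.5
```

The same decomposition explains the CP67/VM370 result described below: with near-zero software overhead (short pathlengths) and cache-affinity gains lifting the effective cycle factor, the product of the factors can reach or exceed 2.0.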

After joining IBM, one of my hobbies was enhanced production operating systems for internal datacenters, and the online world-wide sales&marketing HONE systems were a long-time customer. In the morph of CP67->VM370, much was greatly simplified and/or dropped (including a lot of my stuff done as an undergraduate, as well as the multiprocessor support). I spent 1974 moving lots of the dropped CP67 features and performance work to VM370. It was about this time that the US HONE datacenters were consolidated in Palo Alto ... with a max configured loosely-coupled, single-system image (i.e. eight systems sharing a large disk farm) with load-balancing and fall-over. I then retrofitted CP67 multiprocessor support to VM370, initially so HONE could add a 2nd processor to each system (for 16 processors total in the single-system-image complex). I had further optimized the CP67 SMP support for super-short pathlengths and played some games with "cache affinity" to improve cache hit rates ... the SMP systems were frequently able to hit twice the throughput of a single processor (offsetting hardware only running at 1.8 times a single processor).

SMP/multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 4341

Refed: **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 4341
Date: 12 Feb, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2023.html#72 IBM 4341
https://www.garlic.com/~lynn/2023.html#73 IBM 4341
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#76 IBM 4341
https://www.garlic.com/~lynn/2023.html#81 IBM 4341
https://www.garlic.com/~lynn/2023.html#86 IBM San Jose
https://www.garlic.com/~lynn/2023.html#87 IBM San Jose
https://www.garlic.com/~lynn/2023.html#89 IBM San Jose
https://www.garlic.com/~lynn/2023.html#91 IBM 4341

IBM's 3033. "The Big One": IBM's 3033
https://www.ibm.com/ibm/history/exhibits/3033/3033_intro.html

early 70s, IBM had the "Future System" project, which was going to completely replace 370 and was totally different. During the period, 370 efforts were being shut down/killed (credited with giving the 370 clone makers their market foothold). Then when FS implodes, there is a mad rush to get stuff back in the 370 product pipelines, including kicking off the quick&dirty 3033&3081 efforts in parallel ... some more detail
http://www.jfsowa.com/computer/memo125.htm
https://people.computing.clemson.edu/~mark/fs.html

I had gotten sucked into a 16-processor 370 project and we had co-opted the 3033 processor engineers into working on it in their spare time (a lot more interesting than remapping 168-3 logic to 20% faster chips). Everybody thought it was really great until somebody told the head of POK that it could be decades before POK's favorite son operating system (MVS) had effective 16-way support. Then some of us were invited to never visit POK again, and the 3033 processor engineers were directed to keep heads down on 3033 "only" (and no distractions) ... IBM doesn't ship a 16-processor mainframe until after the turn of the century.

Future System posts
https://www.garlic.com/~lynn/submain.html#futuresys
SMP/multiprocessor posts
https://www.garlic.com/~lynn/subtopic.html#smp

some recent bldg15 engineering 3033 posts
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#73 IBM 4341
https://www.garlic.com/~lynn/2023.html#41 IBM 3081 TCM
https://www.garlic.com/~lynn/2022e.html#79 Enhanced Production Operating Systems
https://www.garlic.com/~lynn/2022d.html#66 VM/370 Turns 50 2Aug2022
https://www.garlic.com/~lynn/2022d.html#47 360&370 I/O Channels
https://www.garlic.com/~lynn/2021k.html#107 IBM Future System
https://www.garlic.com/~lynn/2021i.html#85 IBM 3033 channel I/O testing
https://www.garlic.com/~lynn/2019c.html#70 2301, 2303, 2305-1, 2305-2, paging, etc
https://www.garlic.com/~lynn/2019.html#83 The Sublime: Is it the same for IBM and Special Ops?
https://www.garlic.com/~lynn/2019.html#38 long-winded post thread, 3033, 3081, Future System
https://www.garlic.com/~lynn/2018f.html#93 ACS360 and FS
https://www.garlic.com/~lynn/2016f.html#85 3033

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 4341

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 4341
Date: 13 Feb, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2023.html#72 IBM 4341
https://www.garlic.com/~lynn/2023.html#73 IBM 4341
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#76 IBM 4341
https://www.garlic.com/~lynn/2023.html#81 IBM 4341
https://www.garlic.com/~lynn/2023.html#86 IBM San Jose
https://www.garlic.com/~lynn/2023.html#87 IBM San Jose
https://www.garlic.com/~lynn/2023.html#89 IBM San Jose
https://www.garlic.com/~lynn/2023.html#91 IBM 4341
https://www.garlic.com/~lynn/2023.html#92 IBM 4341

Channel extender with full-duplex T1 ... 1.5mbits/sec. In-house response was a quarter second or less ... for the 3277, .086sec hardware response plus .20-.25sec system response ... when they used the best of my systems, it was down to .11sec system response. 3278/3274 hardware response was .3-.5secs (3270 channel-attached controllers for 300+ terminals)

Trivia: in-house, STL had 327x direct channel-attached controllers spread across all channels, shared with DASD, for the 168-3 systems. The channel-extender box was several times faster than the channel-attached 327x controllers ... so moving all 327x controllers to the remote bldg (behind the fast channel-extender box) meant that the same 3270 traffic was done with significantly reduced channel busy ... which increased DASD throughput and improved overall system throughput by 10-20%. There was some suggestion to use the channel-extender boxes for all systems ... including all in-house 327x channel-attached controllers (improving system throughput for all systems).
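The channel-busy effect can be sketched with a toy calculation. The per-operation busy times below are made-up illustrative values, not the actual STL measurements; only the 10-20% system-throughput figure comes from the text above.

```python
# Toy model of the effect described above: the same 3270 traffic keeps a
# shared channel busy for less time behind a fast channel-extender box
# than behind a slow 327x controller, freeing channel time for DASD.
# Busy-per-operation values are illustrative assumptions, not measurements.

def channel_busy_fraction(ops_per_sec, busy_per_op_s):
    """Fraction of a second the channel is busy with this traffic."""
    return ops_per_sec * busy_per_op_s

OPS = 100                                   # hypothetical 3270 I/Os/sec
slow = channel_busy_fraction(OPS, 0.004)    # direct 327x controller
fast = channel_busy_fraction(OPS, 0.001)    # via channel-extender box

freed = slow - fast                         # channel time freed for DASD
assert abs(freed - 0.3) < 1e-9              # ~30% of a channel freed
```

With slow controllers holding the channel several times longer per operation, the freed channel time goes to DASD instead, consistent with the overall throughput improvement described.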

A similar installation was done for IBM Boulder (using a T1 infrared optical modem).

By comparison, MVS system response was typically multiple seconds.

1980 was the start of studies showing that .25sec response (or better) improved productivity ... however, you needed one of my highly optimized VM370 systems with .11sec system response, plus a channel-attached 3270 controller with 3277 terminals that had .086sec hardware response. For the 3278, lots of electronics were moved back into the controller (to reduce manufacturing cost) ... but it significantly increased coax protocol chatter latency ... so hardware response increased to .3-.5secs
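A minimal sketch of the response-time arithmetic above (the constants are the post's figures; the model is simply their sum):

```python
# User-perceived response ~= terminal hardware latency + host system
# response. Constants are the round figures from the text above.

def perceived_response(hw_latency_s, sys_response_s):
    """Total keyboard-to-screen response in seconds."""
    return hw_latency_s + sys_response_s

TARGET = 0.25    # quarter-second threshold from the 1980 studies

# 3277 on a channel-attached controller with tuned VM370 (.11s system)
r_3277 = perceived_response(0.086, 0.11)    # ~0.196s, under target
# 3278/3274: protocol chatter raised hardware latency to .3-.5s
r_3278 = perceived_response(0.3, 0.11)      # ~0.41s, over target

assert r_3277 < TARGET < r_3278
```

Even with the same tuned host, the 3278's extra hardware latency alone puts total response over the quarter-second threshold.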

trivia: when the 3278 came out (with the enormous protocol chatter latency) ... a letter was written to the 3278 product administrator about it being a much worse device for interactive computing. Eventually he responded that the 3278 wasn't designed for interactive computing but for "data entry" (i.e. electronic keypunch). note: MVS system response was so bad that MVS users didn't even notice the difference between 3277 and 3278 on channel-attached 3270 controllers

channel extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender

3277 versus 3278 hardware response posts
https://www.garlic.com/~lynn/2023.html#2 big and little, Can BCD and binary multipliers share circuitry?
https://www.garlic.com/~lynn/2022h.html#96 IBM 3270
https://www.garlic.com/~lynn/2022c.html#68 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022b.html#123 System Response
https://www.garlic.com/~lynn/2022b.html#110 IBM 4341 & 3270
https://www.garlic.com/~lynn/2022b.html#33 IBM 3270 Terminals
https://www.garlic.com/~lynn/2021j.html#74 IBM 3278
https://www.garlic.com/~lynn/2021i.html#69 IBM MYTE
https://www.garlic.com/~lynn/2021c.html#0 Colours on screen (mainframe history question) [EXTERNAL]
https://www.garlic.com/~lynn/2021.html#84 3272/3277 interactive computing
https://www.garlic.com/~lynn/2019e.html#28 XT/370
https://www.garlic.com/~lynn/2018f.html#54 PROFS, email, 3270
https://www.garlic.com/~lynn/2018d.html#32 Walt Doherty - RIP
https://www.garlic.com/~lynn/2017e.html#26 [CM] What was your first home computer?
https://www.garlic.com/~lynn/2017d.html#25 ARM Cortex A53 64 bit
https://www.garlic.com/~lynn/2016e.html#51 How the internet was invented
https://www.garlic.com/~lynn/2016d.html#42 Old Computing
https://www.garlic.com/~lynn/2016c.html#8 You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
https://www.garlic.com/~lynn/2016.html#15 Dilbert ... oh, you must work for IBM
https://www.garlic.com/~lynn/2015.html#38 [CM] IBM releases Z13 Mainframe - looks like Batman
https://www.garlic.com/~lynn/2014h.html#106 TSO Test does not support 65-bit debugging?
https://www.garlic.com/~lynn/2014g.html#26 Fifty Years of BASIC, the Programming Language That Made Computers Personal
https://www.garlic.com/~lynn/2014g.html#23 Three Reasons the Mainframe is in Trouble
https://www.garlic.com/~lynn/2001m.html#19 3270 protocol

--
virtualization experience starting Jan1968, online at home since Mar1970

The Ladder of Incompetence: 5 Reasons We Promote the Wrong People

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: The Ladder of Incompetence: 5 Reasons We Promote the Wrong People
Date: 13 Feb, 2023
Blog: Facebook
The Ladder of Incompetence: 5 Reasons We Promote the Wrong People
https://news.clearancejobs.com/2023/02/07/the-ladder-of-incompetence-5-reasons-we-promote-the-wrong-people/

.... or "why heads roll uphill" ... and from John Boyd:
There are two career paths in front of you, and you have to choose which path you will follow. One path leads to promotions, titles, and positions of distinction.... The other path leads to doing things that are truly significant for the Air Force, but the rewards will quite often be a kick in the stomach because you may have to cross swords with the party line on occasion. You can't go down both paths, you have to choose. Do you want to be a man of distinction or do you want to do things that really influence the shape of the Air Force? To be or to do, that is the question.
... Learson trying to block bureaucrats and careerists (and MBAs) destroying Watsons' legacy
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/

John Boyd posts & web URLs
https://www.garlic.com/~lynn/subboyd.html
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

misc past posts mentioning heads roll uphill
https://www.garlic.com/~lynn/2022g.html#68 Datacenter Vulnerability
https://www.garlic.com/~lynn/2022g.html#65 IBM DPD
https://www.garlic.com/~lynn/2022.html#29 IBM HONE
https://www.garlic.com/~lynn/2022.html#4 GML/SGML/HTML/Mosaic
https://www.garlic.com/~lynn/2017h.html#19 OFF TOPIC: University of California, Irvine, revokes 500 admissions
https://www.garlic.com/~lynn/2013.html#8 Is Microsoft becoming folklore?
https://www.garlic.com/~lynn/2012h.html#54 How will mainframers retiring be different from Y2K?
https://www.garlic.com/~lynn/2012b.html#76 IBM Doing Some Restructuring?
https://www.garlic.com/~lynn/2010i.html#15 Idiotic programming style edicts
https://www.garlic.com/~lynn/2008m.html#41 IBM--disposition of clock business
https://www.garlic.com/~lynn/2007.html#26 MS to world: Stop sending money, we have enough - was Re: Most ... can't run Vista
https://www.garlic.com/~lynn/2007.html#22 MS to world: Stop sending money, we have enough - was Re: Most ... can't run Vista
https://www.garlic.com/~lynn/2001.html#29 Review of Steve McConnell's AFTER THE GOLD RUSH
https://www.garlic.com/~lynn/2000.html#91 Ux's good points.

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM San Jose

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM San Jose
Date: 13 Feb, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#71 IBM 4341
https://www.garlic.com/~lynn/2023.html#72 IBM 4341
https://www.garlic.com/~lynn/2023.html#73 IBM 4341
https://www.garlic.com/~lynn/2023.html#74 IBM 4341
https://www.garlic.com/~lynn/2023.html#76 IBM 4341
https://www.garlic.com/~lynn/2023.html#81 IBM 4341
https://www.garlic.com/~lynn/2023.html#86 IBM San Jose
https://www.garlic.com/~lynn/2023.html#87 IBM San Jose
https://www.garlic.com/~lynn/2023.html#89 IBM San Jose
https://www.garlic.com/~lynn/2023.html#91 IBM 4341
https://www.garlic.com/~lynn/2023.html#92 IBM 4341
https://www.garlic.com/~lynn/2023.html#93 IBM 4341

... the offsite bldg was 300+ terminals ... three 56kbit links for 50 terminals ... the equivalent would have been something like 6*3, or 18, 56kbit links for 300+ terminals.

... 37x5 & T1 support: I had started HSDT in the early 80s for T1 and faster computer links; however, IBM 37x5 boxes didn't support links faster than 56kbit. In the mid-80s, the communication group prepared a report for the corporate executive committee on why customers wouldn't be needing T1 for another decade or so. They had done an analysis of customer 37x5 "fat pipes" (multiple parallel 56kbit links treated as a single logical link), showing the number of customers with 2, 3, 4, 5, etc 56kbit-link "fat pipes" ... dropping to zero by six links. What they didn't know (or didn't want to tell the executive committee) was that the typical telco tariff for a T1 was about the same as 6 or 7 56kbit links. When customers got to 300kbit or so aggregate ... they just switched to full T1 (1.5mbits) with non-IBM controllers. At the time, a trivial survey found 200 customers with full T1 and non-IBM controllers.
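The tariff arithmetic behind the "fat pipe" analysis can be sketched as follows (the six-link breakeven and the ~300kbit switchover are the post's round numbers, not an actual tariff schedule):

```python
# Why the 37x5 "fat pipe" counts dropped to zero by six links: at that
# point a full T1 cost about the same but carried ~27 links' capacity.
# The breakeven count is the post's round number, not a real tariff.

LINK_56K_BPS = 56_000      # one leased 56kbit link
T1_BPS = 1_544_000         # full US T1
T1_COST_IN_LINKS = 6       # T1 tariff ~ 6-7 parallel 56kbit links

def cheaper_to_switch(n_links):
    """True once n parallel 56kbit links cost as much as a full T1."""
    return n_links >= T1_COST_IN_LINKS

aggregate_at_six = 6 * LINK_56K_BPS        # 336,000 bits/sec (~300kbit)
capacity_ratio = T1_BPS // LINK_56K_BPS    # a T1 is worth ~27 such links

assert cheaper_to_switch(6) and not cheaper_to_switch(5)
assert aggregate_at_six == 336_000 and capacity_ratio == 27
```

So a customer at the six-link breakeven paid roughly the same for a T1 but got more than four times the capacity, which is why the fat-pipe counts vanished at exactly that point.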

The communication group was finally forced to come out with full (terrestrial) T1 support, the "3737", which had a whole boatload of Motorola 68k processors and memory with a mini-VTAM that simulated a CTCA link to host VTAM. The problem was that host VTAM had an embedded "window pacing" algorithm that would hit the limit of outstanding RUs well before returning ACKs started arriving (even on short-haul terrestrial T1). The pseudo CTCA/VTAM (in the 68k processors) would immediately signal "ACK" to the host processor (even before transmitting to the remote 3737) ... trying to keep host VTAM transmitting RUs. Even with all the memory and processors, the 3737 peaked out around 2mbits/sec (US T1 full-duplex is 3mbits/sec aggregate, EU T1 full-duplex is 4mbits/sec aggregate). It was obvious that the 3737 couldn't handle long-haul terrestrial T1 (with longer round-trip latency) ... and had no chance at all of handling satellite T1 ... a single-hop round trip is 88,000 miles, or at least .47secs, and a double hop at least .94secs (double hop was a problem even at 56kbits; STL once tried a double-hop satellite link with Hursley ... VM370/RSCS worked fine but MVS/JES2 couldn't hack it).
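The latency arithmetic, and why a fixed pacing window caps throughput as round-trip time grows, can be sketched as follows (the window and RU sizes are illustrative assumptions, not VTAM's actual values):

```python
# Satellite round-trip times from the mileage in the text, plus the
# generic window-pacing throughput bound: with at most `window_rus`
# unacknowledged RUs, the sender stalls every round trip.
# Window/RU sizes below are illustrative, not VTAM's actual values.

C_MILES_PER_SEC = 186_282    # speed of light

def satellite_rtt_s(hops):
    """~88,000 miles round trip per geosynchronous hop, as in the text."""
    return hops * 88_000 / C_MILES_PER_SEC

def window_limited_bps(window_rus, ru_bytes, rtt_s):
    """Throughput ceiling: window * RU size / round-trip time."""
    return window_rus * ru_bytes * 8 / rtt_s

rtt_single = satellite_rtt_s(1)    # ~0.47s
rtt_double = satellite_rtt_s(2)    # ~0.94s

# even a generous 7 x 4KB window can't fill a 1.5mbit T1 at satellite RTT
cap = window_limited_bps(7, 4096, rtt_single)
assert round(rtt_single, 2) == 0.47 and round(rtt_double, 2) == 0.94
assert cap < 1_544_000
```

The bound shrinks in proportion to round-trip time, which is why the 3737 had to spoof early ACKs at all, and why satellite latencies were hopeless regardless.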

some old 3737 related email
https://www.garlic.com/~lynn/2011g.html#email880130
https://www.garlic.com/~lynn/2011g.html#email880606
https://www.garlic.com/~lynn/2018f.html#email880715
https://www.garlic.com/~lynn/2018f.html#email880725
https://www.garlic.com/~lynn/2011g.html#email881005

Related: a circa-1990 study found that host VTAM LU6.2 had a 160k-instruction pathlength and 15 buffer copies ... compared to UNIX TCP/IP with a 5k-instruction pathlength and five buffer copies.

other TCP/IP trivia: the communication group was fiercely fighting off release of mainframe TCP/IP support, but when they lost, they changed tactics and claimed they had corporate responsibility for everything that crossed datacenter walls, so it had to be shipped through them ... what shipped got 44kbytes/sec aggregate throughput using nearly a whole 3090 processor. I did the enhancements for RFC1044 and, in some tuning tests at Cray Research between a 4341 and a Cray, got sustained 4341 channel throughput using only a modest amount of the 4341 (something like a 500 times improvement in bytes moved per instruction executed). Later in the 90s, the communication group hired a silicon valley consultant to implement TCP/IP support directly in VTAM. Initially he demoed TCP/IP running significantly faster than LU6.2. He was then told that everybody "knows" a "proper" TCP/IP implementation is much slower than LU6.2, and they would only be paying for a "proper" implementation.

hsdt posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
RFC1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044
channel-extender posts
https://www.garlic.com/~lynn/submisc.html#channel.extender

--
virtualization experience starting Jan1968, online at home since Mar1970

Mainframe Assembler

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Mainframe Assembler
Date: 13 Feb, 2023
Blog: Facebook
I had a bunch of archived stuff from undergraduate days in the 60s (lots of assembler, the 1401 MPIO rewrite for 360/30, lots of OS/360 stuff, and significant rewrites of early CP67 & CMS; as a brash youngster I took pride in rearranging code in different CP67 parts to have things happen in zero instructions) and science center days in the early 70s, on 800bpi tape ... then copied to 1600bpi, adding more stuff. I then copied it to 6250bpi and, after research moved up the hill to Almaden, copied to 3480 cartridges ... including the 60s & 70s archived stuff on three replicated 3480 cartridge copies ... all in the Almaden tape library. Then Almaden had operational problems where random tapes were being mounted as scratch ... scratching ten of my tapes ... including the triple-redundant copies of the 60s/70s archive.

Shortly before Almaden operations started mounting random tapes as scratch, Melinda asked if I had an early-70s copy of the incremental source update process ... I was able to pull it all off and email it. Melinda ref:
https://www.leeandmelindavarian.com/Melinda#VMHist
archived email from that exchange
https://www.garlic.com/~lynn/2006w.html#email850906
https://www.garlic.com/~lynn/2006w.html#email850906b
https://www.garlic.com/~lynn/2006w.html#email850908

In the early 80s, I wanted to demonstrate that REX(X) was not just another pretty scripting language (before it was renamed REXX and released to customers). I decided on redoing a large assembler application (dump reader & fault analysis) in REX with ten times the function and ten times the performance, working half-time over three months elapsed. I finished early, so started writing automated scripts that searched for the most common failure signatures. It also included a pseudo disassembler ... converting storage areas into instruction sequences. I had thought it would be released to customers, but for whatever reasons it wasn't (this was in the OCO-wars period) ... I finally got permission to give talks on the implementation at user group meetings ... and within a few months similar implementations started showing up at customer shops.

side-bar: In Aug1976, TYMSHARE started offering their CMS-based online computer conferencing facility "free" to the user group SHARE
https://en.wikipedia.org/wiki/SHARE_(computing)
as "VMSHARE" ... archive here (including some amount of OCO-war discussions)
http://vm.marist.edu/~vmshare

some ACP/TPF res trivia:

late 80s, my wife did a short stint as chief architect for Amadeus (built off the old Eastern 370/195 System/One) ... however, she backed the Europeans in the use of x.25 (instead of SNA). The communication group then got her replaced ... but it didn't do them much good, because Europe went with x.25 anyway ... and replaced their replacement.

... and in the mid-90s, after having left IBM, I was asked into the largest airline res system ... to look at the ten things they couldn't do ... starting with ROUTES (finding flts from origin to destination) ... got a full machine-readable copy of the OAG (all commercial airline flts for all airlines and all airports in the world). I came back two months later with an (RS/6000, unix, c-language) implementation that did all their impossible things. The initial code did all the stuff they did then, 100 times faster (than the ACP/TPF implementation, which in part still suffered from 60s technology trade-offs); then adding implementation of all the additional things reduced it to only ten times faster. Sized: ten rack-mount RS/6000-990s could handle all ROUTES requests for all commercial airlines in the world. Less than a decade later (2002), a cellphone (XScale) processor had a MIP rate nearly the aggregate of those ten 990s (two? or was it three? racks).

dumprx posts
https://www.garlic.com/~lynn/submain.html#dumprx

some archived amadeus posts
https://www.garlic.com/~lynn/2022h.html#97 IBM 360
https://www.garlic.com/~lynn/2022h.html#10 Google Cloud Launches Service to Simplify Mainframe Modernization
https://www.garlic.com/~lynn/2022f.html#113 360/67 Virtual Memory
https://www.garlic.com/~lynn/2022c.html#75 lock me up, was IBM Mainframe market
https://www.garlic.com/~lynn/2021b.html#0 Will The Cloud Take Down The Mainframe?
https://www.garlic.com/~lynn/2017d.html#0 IBM & SABRE
https://www.garlic.com/~lynn/2016d.html#48 PL/I advertising
https://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2014c.html#69 IBM layoffs strike first in India; workers describe cuts as 'slaughter' and 'massive'
https://www.garlic.com/~lynn/2012n.html#41 System/360--50 years--the future?
https://www.garlic.com/~lynn/2012j.html#5 Interesting News Article
https://www.garlic.com/~lynn/2011.html#17 Looking for a real Fortran-66 compatible PC compiler (CP/M or DOS or Windows, doesn't matter)
https://www.garlic.com/~lynn/2010n.html#16 Sabre Talk Information?
https://www.garlic.com/~lynn/2010g.html#29 someone smarter than Dave Cutler
https://www.garlic.com/~lynn/2007k.html#72 The top 10 dead (or dying) computer skills

some archived ROUTES posts
https://www.garlic.com/~lynn/2022h.html#58 Model Mainframe
https://www.garlic.com/~lynn/2022c.html#76 lock me up, was IBM Mainframe market
https://www.garlic.com/~lynn/2022b.html#18 Channel I/O
https://www.garlic.com/~lynn/2021i.html#77 IBM ACP/TPF
https://www.garlic.com/~lynn/2021i.html#76 IBM ITPS
https://www.garlic.com/~lynn/2021f.html#8 Air Traffic System
https://www.garlic.com/~lynn/2021b.html#6 Airline Reservation System
https://www.garlic.com/~lynn/2021.html#71 Airline Reservation System
https://www.garlic.com/~lynn/2016f.html#109 Airlines Reservation Systems
https://www.garlic.com/~lynn/2016.html#58 Man Versus System
https://www.garlic.com/~lynn/2015d.html#84 ACP/TPF
https://www.garlic.com/~lynn/2013g.html#87 Old data storage or data base
https://www.garlic.com/~lynn/2011e.html#8 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011c.html#42 If IBM Hadn't Bet the Company
https://www.garlic.com/~lynn/2010j.html#53 Article says mainframe most cost-efficient platform
https://www.garlic.com/~lynn/2010b.html#73 Happy DEC-10 Day
https://www.garlic.com/~lynn/2007g.html#22 Bidirectional Binary Self-Joins

--
virtualization experience starting Jan1968, online at home since Mar1970

Online Computer Conferencing

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Online Computer Conferencing
Date: 14 Feb, 2023
Blog: Facebook
Note: a precursor to the internal forums (and social media) ... Aug1976, TYMSHARE
https://en.wikipedia.org/wiki/Tymshare
started offering their CMS-based online computer conferencing service "free" to the mainframe user group SHARE
https://en.wikipedia.org/wiki/SHARE_(computing)
as VMSHARE ... archives here
http://vm.marist.edu/~vmshare

I cut a deal with Tymshare to get a monthly tape dump of all VMSHARE (and later PCSHARE) files to put up on internal systems (including the online world-wide sales and marketing support HONE systems) and the internal network. The biggest problem I had was with IBM lawyers, who were concerned that internal employees exposed to customer information would be contaminated. I was then blamed for online computer conferencing in the late 70s and early 80s on the internal network (larger than the arpanet/internet from just about the beginning until some time mid/late 80s). Folklore is that when the corporate executive committee was told about it, 5of6 wanted to fire me. It really took off in the spring of 1981 after I distributed a trip report of a visit to Jim Gray at Tandem. From the IBM Jargon dictionary:
https://comlay.net/ibmjarg.pdf
Tandem Memos - n. Something constructive but hard to control; a fresh of breath air (sic). That's another Tandem Memos. A phrase to worry middle management. It refers to the computer-based conference (widely distributed in 1981) in which many technical personnel expressed dissatisfaction with the tools available to them at that time, and also constructively criticized the way products were [are] developed. The memos are required reading for anyone with a serious interest in quality products. If you have not seen the memos, try reading the November 1981 Datamation summary.

... snip ...

Officially sanctioned software and forums were then created with "moderators" ... including VMTOOLS and PCTOOLS.

other trivia: before IBM/PC 3270 emulation (and screen scraping), the author of VMSG (the email client also used by PROFS) also implemented parasite/story ... a CMS app that leveraged VM370 PVM pseudo-3270 support (there was also a PVM->CCDN gateway) and a HLLAPI-like programming language. Old archive post with a "story" that implements an automated PUT-bucket retriever (i.e. automated login into RETAIN and download):
https://www.garlic.com/~lynn/2001k.html#36
other automated "story"
https://www.garlic.com/~lynn/2001k.html#35

OS/2 trivia: old email about somebody in OS/2 tracking me down ... they had been told I was responsible for VM370 scheduling and resource management
https://www.garlic.com/~lynn/2003f.html#email871124
https://www.garlic.com/~lynn/2007i.html#email871204
https://www.garlic.com/~lynn/2007i.html#email871204b

IBM/PC & DOS trivia: on announcement, I ordered one on the employee discount program ... it took so long to arrive that the quantity-one street price had dropped below the employee-discount price I paid (it would have been cheaper and faster not to have used the employee discount program).

commercial virtual machine, online service bureau posts
https://www.garlic.com/~lynn/submain.html#timeshare
online computer conferencing posts
https://www.garlic.com/~lynn/subnetwork.html#cmc
HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
dynamic adaptive resource management posts
https://www.garlic.com/~lynn/subtopic.html#fairshare

--
virtualization experience starting Jan1968, online at home since Mar1970

'Pay your fair share': Biden's talk on taxes echoes our findings The State of the Union speech hammered on tax inequality

From: Lynn Wheeler <lynn@garlic.com>
Subject: 'Pay your fair share': Biden's talk on taxes echoes our findings The State of the Union speech hammered on tax inequality
Date: 14 Feb, 2023
Blog: Facebook
'Pay your fair share': Biden's talk on taxes echoes our findings The State of the Union speech hammered on tax inequality. It's what we've been reporting on for years.
https://publicintegrity.org/inequality-poverty-opportunity/taxes/pay-fair-share-taxes-biden-state-of-the-union/

prior administration claims about tax cuts for corporations and the wealthy were fabrication

On The Deficit, GOP Has Been Playing Us All For Suckers
https://www.forbes.com/sites/stancollender/2018/04/15/on-the-deficit-gop-has-been-playing-us-all-for-suckers/
U.S. Cash Repatriation Plunges 50%, Defying Trump's Tax Forecast
https://www.bloomberg.com/news/articles/2018-12-19/u-s-offshore-repatriated-cash-fell-almost-50-in-third-quarter
You paid taxes. These corporations didn't.
https://publicintegrity.org/business/taxes/trumps-tax-cuts/you-paid-taxes-these-corporations-didnt/

claims that corporations would use the tax cuts for infrastructure investments and employee retention and bonuses were also fabrication ... around 98% went for executive bonuses, stock dividends and stock buybacks.

Corporations Say Publicly They'll Pocket the Tax Cut, But Republicans Aren't Listening
https://theintercept.com/2017/12/19/tax-bill-corporate-cut-stock-buyback-republican/
US firms will now focus on stock buybacks after tax cuts, David Rubenstein says
https://www.cnbc.com/2018/01/24/us-firms-will-now-focus-on-stock-buybacks-after-tax-cuts-david-rubenstein-says.html
Share buyback machine now in overdrive -- dropping a strong hint at what CEOs plan to do with tax savings
https://www.marketwatch.com/story/share-buybacks-spike-dropping-a-strong-hint-at-what-ceos-plan-to-do-with-tax-savings-2017-12-08
How Much Can Buybacks Rise on Tax Cuts? This Estimate Says 70%
https://www.bloomberg.com/news/articles/2018-01-03/how-much-can-buybacks-rise-on-tax-cuts-this-estimate-says-70
Companies buying back their own shares is the only thing keeping the stock market afloat right now
https://www.cnbc.com/2018/07/02/corporate-buybacks-are-the-only-thing-keeping-the-stock-market-afloat.html
Stockman (Reagan's budget director in the 80s), "The Great Deformation: The Corruption of Capitalism in America"
https://www.amazon.com/Great-Deformation-Corruption-Capitalism-America-ebook/dp/B00B3M3UK6/

Going back further: in 2002, Congress let the fiscal responsibility act lapse (gov. spending can't exceed tax revenue, on its way to eliminating all federal debt). By 2005, the US Comptroller General was including in speeches that nobody in Congress was capable of middle-school arithmetic for how badly they were savaging the budget. A CBO 2010 report for 2003-2009 showed tax revenue cut $6T and spending increased by $6T, for a $12T gap (compared to a fiscally responsible budget) ... the first time taxes were cut to not pay for two wars. Sort of a confluence: the Federal Reserve and Too Big To Fail needed huge federal debt, special interests wanted a huge tax cut, and the Military-Industrial Complex wanted a huge spending increase and perpetual wars.

inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
fiscal responsibility act posts
https://www.garlic.com/~lynn/submisc.html#fiscal.responsibility.act
tax fraud, tax evasion, tax loopholes, tax avoidance, tax havens posts
https://www.garlic.com/~lynn/submisc.html#tax.evasion
Comptroller General posts
https://www.garlic.com/~lynn/submisc.html#comptroller.general
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback
Military-Industrial Complex posts
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
Perpetual War posts
https://www.garlic.com/~lynn/submisc.html#perpetual.war
Too Big To Fail (Too Big To Prosecute, Too Big To Jail)
https://www.garlic.com/~lynn/submisc.html#too-big-to-fail
Fed chairmen, Federal Reserve, etc posts
https://www.garlic.com/~lynn/submisc.html#fed.chairman

--
virtualization experience starting Jan1968, online at home since Mar1970

Online Computer Conferencing

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Online Computer Conferencing
Date: 14 Feb, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#97 Online Computer Conferencing

Opel's obit ...
https://www.pcworld.com/article/243311/former_ibm_ceo_john_opel_dies.html
According to the New York Times, it was Opel who met with Bill Gates, CEO of the then-small software firm Microsoft, to discuss the possibility of using Microsoft PC-DOS OS for IBM's about-to-be-released PC. Opel set up the meeting at the request of Gates' mother, Mary Maxwell Gates. The two had both served on the National United Way's executive committee.

... snip ...

When Boca claimed that they weren't doing software for ACORN (the unannounced IBM/PC), an IBM group was formed in silicon valley to do PC software ... every month the group checked that Boca (still) wasn't doing software. Then all of a sudden, if you wanted to do software for ACORN, you had to move to Boca. Boca didn't want internal IBM operations competing with them on ACORN (internal politics) ... even going to outsourcing with an external organization where Boca controlled the contract interface.

before msdos
https://en.wikipedia.org/wiki/MS-DOS
there was Seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before Seattle computer, there was cp/m
https://en.wikipedia.org/wiki/CP/M
before developing cp/m, Kildall worked on cp67/cms at npg (gone 404, but lives on at the wayback machine)
https://web.archive.org/web/20071011100440/http://www.khet.net/gmc/docs/museum/en_cpmName.html
npg
https://en.wikipedia.org/wiki/Naval_Postgraduate_School

trivia: CP67/CMS was a precursor to personal computing; some of the MIT 7094/CTSS people
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
went to the 5th flr, project mac, and MULTICS.
https://en.wikipedia.org/wiki/Multics

Others went to the 4th flr, the IBM Cambridge Science Center, and did virtual machine CP40/CMS (on a 360/40 with hardware mods for virtual memory; it morphs into CP67/CMS when the 360/67, standard with virtual memory, becomes available ... precursor to VM370), online and performance apps, CTSS RUNOFF redone for CMS as SCRIPT, GML (invented at the science center in 1969, with GML tag processing added to SCRIPT; a decade later GML morphs into ISO SGML, and after another decade morphs into HTML at CERN), networking, etc.
https://en.wikipedia.org/wiki/CP/CMS
https://en.wikipedia.org/wiki/Conversational_Monitor_System

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet
GML/SGML/HTML
https://www.garlic.com/~lynn/submain.html#sgml

posts with ctss, cp40/cms, cp67/cms, cp/m, ms/dos
https://www.garlic.com/~lynn/2023.html#30 IBM Change
https://www.garlic.com/~lynn/2022g.html#56 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022g.html#2 VM/370
https://www.garlic.com/~lynn/2022f.html#107 IBM Downfall
https://www.garlic.com/~lynn/2022f.html#72 IBM/PC
https://www.garlic.com/~lynn/2022f.html#17 What's different, was Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022f.html#7 Vintage Computing
https://www.garlic.com/~lynn/2022e.html#44 IBM Chairman John Opel
https://www.garlic.com/~lynn/2022d.html#90 Short-term profits and long-term consequences -- did Jack Welch break capitalism?
https://www.garlic.com/~lynn/2022d.html#44 CMS Personal Computing Precursor
https://www.garlic.com/~lynn/2022c.html#42 Why did Dennis Ritchie write that UNIX was a modern implementation of CTSS?
https://www.garlic.com/~lynn/2022c.html#29 Unix work-alike
https://www.garlic.com/~lynn/2016d.html#33 The Network Nation, Revised Edition
https://www.garlic.com/~lynn/2016b.html#17 IBM Destination z - What the Heck Is JCL and Why Does It Look So Funny?
https://www.garlic.com/~lynn/2012f.html#41 Hi, Does any one knows the true origin of the usage of the word bug in computers to design a fault?
https://www.garlic.com/~lynn/2012c.html#24 Original Thinking Is Hard, Where Good Ideas Come From
https://www.garlic.com/~lynn/2012b.html#6 Cloud apps placed well in the economic cycle
https://www.garlic.com/~lynn/2012.html#100 The PC industry is heading for collapse
https://www.garlic.com/~lynn/2012.html#96 Has anyone successfully migrated off mainframes?

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Introduces 'Vela' Cloud AI Supercomputer Powered by Intel, Nvidia

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Introduces 'Vela' Cloud AI Supercomputer Powered by Intel, Nvidia
Date: 15 Feb, 2023
Blog: Facebook
IBM Introduces 'Vela' Cloud AI Supercomputer Powered by Intel, Nvidia
https://www.hpcwire.com/2023/02/08/ibm-introduces-vela-cloud-ai-supercomputer-powered-by-intel-nvidia/
Specs first. Each of Vela's nodes is equipped with dual Intel Xeon "Cascade Lake" CPUs (notably forgoing IBM's own Power10 chips, introduced in 2021), octuple Nvidia A100 (80GB) GPUs, 1.5TB of memory and four 3.2TB NVMe drives. In its blog post announcing the system, IBM said the nodes are networked via "multiple 100G network interfaces," and that each node is connected to a different top-of-rack switch, each of which, in turn, is connected to four different spine switches, ensuring both strong cross-rack bandwidth and insulation from component failure. Vela is "natively integrated" with IBM Cloud's virtual private cloud (VPC) environment.

... snip ...

for going on a couple decades, the large cloud operators have claimed that they assemble their servers at 1/3rd the cost of brand name servers. A few years ago, server chip vendors had press that they were shipping at least half their product directly to the cloud megadatacenters (where they did the assembly) ... shortly after, IBM sells off its (intel) server business to lenovo (having previously sold off the IBM/PC business to lenovo).

More than a decade ago, there were articles about how it was possible, using a credit card, to have a large cloud operator spin up an on-demand supercomputer (from server blades in their megadatacenters) ... which would rank in the top 40 supercomputers in the world (there were also reduced rates if you would preschedule spin-up during off-peak periods)

IBM viewed selling (as much) hardware (as possible) as profit; a large cloud operator, with possibly dozens of megadatacenters around the world and half a million or more systems in each, views hardware, power, cooling, manual operations, etc., all as costs to be optimized, with all kinds of trade-offs (e.g. the cost of all-new replacement hardware might be offset by the reduction in power).

cloud megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM ROLM

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM ROLM
Date: 16 Feb, 2023
Blog: Facebook
I had the HSDT project in the early 80s, T1 and faster computer links. Not long after IBM bought ROLM, I was told that I needed to show some IBM content. Even for as slow as T1, the only IBM content I could find was the FSD "Zirpel" T1 card for the Series/1. The problem then was when I went to order some Series/1s, I was told that after IBM bought ROLM, ROLM had ordered a year's backlog of S/1s (ROLM was a Data General shop). The head of the ROLM datacenter was a former IBMer I had known some years earlier before they left IBM. In return for a couple of their Series/1 orders I was supposed to help them upgrade their development & test downloads from 56kbit/sec links ... which were taking 24hrs to download new test systems into the Data General boxes ... to T1 links.

Some years later ... after leaving IBM ... Siemens had acquired ROLM ... and I was dealing with the head of Siemens chip technology, who had an office out at the ROLM campus ... I had designed a new secure chip for electronic financial transactions and it was going to be fab'ed at a new Siemens secure plant in Germany. While I was meeting with him, Siemens spun off the chip business as Infineon, he became president (and got to ring the bell at NYSE) ... and Infineon had 3 new bldgs built at the intersection of 101 & main in San Jose.

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

some rolm (possibly some infineon) and/or zirpel posts
https://www.garlic.com/~lynn/2022f.html#111 IBM Downfall
https://www.garlic.com/~lynn/2021j.html#62 IBM ROLM
https://www.garlic.com/~lynn/2021j.html#12 Home Computing
https://www.garlic.com/~lynn/2021f.html#2 IBM Series/1
https://www.garlic.com/~lynn/2018b.html#9 Important US technology companies sold to foreigners
https://www.garlic.com/~lynn/2017h.html#99 Boca Series/1 & CPD
https://www.garlic.com/~lynn/2016h.html#26 "I used a real computer at home...and so will you" (Popular Science May 1967)
https://www.garlic.com/~lynn/2016d.html#27 Old IBM Mainframe Systems
https://www.garlic.com/~lynn/2015e.html#84 Inaugural Podcast: Dave Farber, Grandfather of the Internet
https://www.garlic.com/~lynn/2015e.html#83 Inaugural Podcast: Dave Farber, Grandfather of the Internet
https://www.garlic.com/~lynn/2014f.html#24 Before the Internet: The golden age of online services
https://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives
https://www.garlic.com/~lynn/2013j.html#43 8080 BASIC
https://www.garlic.com/~lynn/2013j.html#37 8080 BASIC
https://www.garlic.com/~lynn/2013g.html#71 DEC and the Bell System?
https://www.garlic.com/~lynn/2009j.html#4 IBM's Revenge on Sun
https://www.garlic.com/~lynn/2006n.html#25 sorting was: The System/360 Model 20 Wasn't As Bad As All That

other posts mentioning infineon:
https://www.garlic.com/~lynn/2022f.html#112 IBM Downfall
https://www.garlic.com/~lynn/2022f.html#68 Security Chips and Chip Fabs
https://www.garlic.com/~lynn/2022b.html#103 AADS Chip Strawman
https://www.garlic.com/~lynn/2021j.html#41 IBM Confidential
https://www.garlic.com/~lynn/2018b.html#11 Important US technology companies sold to foreigners
https://www.garlic.com/~lynn/2017d.html#10 Encryp-xit: Europe will go all in for crypto backdoors in June
https://www.garlic.com/~lynn/2014h.html#8 Demonstrating Moore's law
https://www.garlic.com/~lynn/2014g.html#41 Special characters for Passwords
https://www.garlic.com/~lynn/2013o.html#80 "Death of the mainframe"
https://www.garlic.com/~lynn/2010e.html#3 "Unhackable" Infineon Chip Physically Cracked - PCWorld
https://www.garlic.com/~lynn/2010d.html#34 "Unhackable" Infineon Chip Physically Cracked
https://www.garlic.com/~lynn/2010d.html#31 Michigan firm sues bank over theft of $560,000
https://www.garlic.com/~lynn/2010d.html#21 Credit card data security: Who's responsible?
https://www.garlic.com/~lynn/2010d.html#7 "Unhackable" Infineon Chip Physically Cracked - PCWorld
https://www.garlic.com/~lynn/2010c.html#61 Engineer shows how to crack a 'secure' TPM chip
https://www.garlic.com/~lynn/2009o.html#66 Need for speedy cryptography
https://www.garlic.com/~lynn/2002h.html#9 Biometric authentication for intranet websites?
https://www.garlic.com/~lynn/2002g.html#56 Siemens ID Device SDK (fingerprint biometrics) ???
https://www.garlic.com/~lynn/2002g.html#10 "Soul of a New Machine" Computer?
https://www.garlic.com/~lynn/2002c.html#21 Opinion on smartcard security requested

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM ROLM

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM ROLM
Date: 16 Feb, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#101 IBM ROLM

AMEX was in competition with KKR for the LBO (private equity, reverse-IPO) take-over of RJR, and KKR won. KKR then ran into trouble and hired away the AMEX president (Gerstner) to help.
https://en.wikipedia.org/wiki/Barbarians_at_the_Gate:_The_Fall_of_RJR_Nabisco

IBM had one of the largest losses in the history of US companies and was being reorged into 13 "baby blues" in preparation for breaking up the company (Dec1992, "How IBM Was Left Behind")
https://web.archive.org/web/20101120231857/http://www.time.com/time/magazine/article/0,9171,977353,00.html
may also work
https://content.time.com/time/subscriber/article/0,33009,977353-1,00.html

we had already left IBM, but we get a call from the bowels of Armonk (corp hdqtrs) asking if we could help with the breakup of the company. Lots of business units were using supplier contracts in other units via MOUs. After the breakup, all of these contracts would be between different companies ... all of those MOUs would have to be cataloged and turned into their own contracts.

before we got started, the IBM board hired the former president of AMEX as the new CEO, who reversed the breakup and used some of the same tactics used at RJR (gone 404, but lives on at the wayback machine)
https://web.archive.org/web/20181019074906/http://www.ibmemployee.com/RetirementHeist.shtml
the above has some IBM-related specifics from
https://www.amazon.com/Retirement-Heist-Companies-Plunder-American-ebook/dp/B003QMLC6K/

Trivia: also in 1992, AMEX had spun off much of its dataprocessing and financial outsourcing businesses as "First Data" ... the largest IPO up until that time. After the turn of the century, I was doing a stint at FDC hdqtrs and some of the executives had previously reported to Gerstner.

Gerstner posts
https://www.garlic.com/~lynn/submisc.html#gerstner
private-equity posts
https://www.garlic.com/~lynn/submisc.html#private.equity
pension posts
https://www.garlic.com/~lynn/submisc.html#pensions
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM ROLM

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM ROLM
Date: 17 Feb, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#101 IBM ROLM
https://www.garlic.com/~lynn/2023.html#102 IBM ROLM

HSDT was having periodic skirmishes with the communication group over T1-and-faster mainframe links (and their lack of them). Mid-80s, I was having some hardware built on the other side of the Pacific ... the friday before a visit, I got an announcement from them about a new internal IBM forum about communications, with the following definitions:
low-speed: 9.6kbits/sec, medium speed: 19.2kbits/sec, high-speed: 56kbits/sec, very high-speed: 1.5mbits/sec

monday morning on wall of conference room on the other side of pacific, there were these definitions:
low-speed: <20mbits/sec, medium speed: 100mbits/sec, high-speed: 200mbits-300mbits/sec, very high-speed: >600mbits/sec

The communication group also prepared a report for the corporate executive committee on why customers wouldn't be needing T1 until well into the 90s. They had done an analysis of customer 37x5 "fat pipes" (multiple parallel 56kbit links treated as a single logical link), showing the number of customers with 2, 3, 4, 5, etc., 56kbit-link "fat pipes" ... dropping to zero by 6 or 7 links. What they didn't know (or didn't want to tell the executive committee) was that the typical telco tariff for a T1 was about the same as for 6 or 7 56kbit links. When customers got to 300kbit/sec or so aggregate ... they just switched to a full T1 (1.5mbits/sec) with non-IBM controllers. At the time, a trivial survey found 200 customers with full T1s and non-IBM controllers.
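The tariff crossover the report missed can be sketched with back-of-envelope arithmetic. This is an illustrative sketch, not real tariff data: it only assumes, per the figures above, that a full T1 costs roughly the same as 6 parallel 56kbit links.

```python
# Fat-pipe tariff crossover sketch: given that a full T1 tariffs at
# roughly the price of 6 parallel 56kbit links (per the text's "6 or 7"
# figure), any aggregate need of ~300kbit/sec already justifies a T1.
# Prices are in relative units (multiples of one 56kbit link tariff).

LINK_56K_BPS = 56_000        # leased 56kbit/sec link
T1_BPS = 1_544_000           # full US T1
T1_PRICE_IN_LINKS = 6        # assumption: T1 tariff ~ 6 x 56kbit links

def cheaper_choice(aggregate_bps: int) -> str:
    """Pick the cheaper option for a required aggregate bandwidth."""
    links_needed = -(-aggregate_bps // LINK_56K_BPS)  # ceiling division
    if links_needed >= T1_PRICE_IN_LINKS:
        return "T1"                    # same money, ~27x the bandwidth
    return f"{links_needed}x56k"

print(cheaper_choice(300_000))   # ~6 links of fat pipe -> full T1 territory
print(cheaper_choice(112_000))   # small aggregate -> parallel 56k still wins
```

Under these assumed relative prices, the customer behavior described above falls out directly: fat pipes of 6+ links never appear in the field because nobody buys them.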

Finally the communication group was forced to come out with full (terrestrial) T1 support, the "3737", which had a whole boatload of Motorola 68k processors and memory, with a mini-VTAM that simulated a CTCA link to the host VTAM. The problem was that host VTAM had an embedded "window pacing" algorithm that would hit the limit of outstanding RUs well before the returning ACKs started arriving back (even on short-haul terrestrial T1), suspending transmission. The pseudo CTCA/VTAM (in the 68k processors) would immediately signal "ACK" to the host processor (even before transmitting to the remote 3737) ... trying to force the host VTAM to continue transmitting. Even with all the memory and processors, the 3737 peaked out around 2mbits/sec (US T1 full-duplex was 3mbits/sec aggregate, EU T1 full-duplex was 4mbits/sec aggregate). It was obvious that the 3737 wasn't able to handle long-haul terrestrial T1 (with longer round-trip latency) ... and had no chance at all of handling satellite T1 ... a single-hop round trip is 88,000 miles, or at least .47secs, and a double hop at least .94secs (double hop was a problem even at 56kbits; STL once tried a double-hop satellite link with Hursley ... VM370/RSCS worked fine but MVS/JES2 couldn't hack it).
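The window-pacing stall described above is the classic bandwidth-delay problem: to keep a link busy, the sender needs bandwidth times round-trip-time bytes in flight before the first ACK returns, and a window sized for short-haul links starves on satellite latency. A minimal sketch of the arithmetic (the T1 rate and satellite RTTs come from the text; the 256-byte RU size and the terrestrial RTTs are illustrative assumptions, not actual VTAM parameters):

```python
import math

# Outstanding RUs needed to keep a link full = bandwidth-delay product
# (bytes in flight before the first ACK can arrive) divided by RU size.

def rus_to_fill_pipe(bandwidth_bps: float, rtt_secs: float,
                     ru_bytes: int = 256) -> int:
    bdp_bytes = bandwidth_bps / 8 * rtt_secs   # bandwidth-delay product
    return math.ceil(bdp_bytes / ru_bytes)

T1_BPS = 1_544_000  # US T1
for label, rtt in [("short-haul terrestrial", 0.01),   # assumed RTT
                   ("long-haul terrestrial", 0.06),    # assumed RTT
                   ("single-hop satellite", 0.47),     # from the text
                   ("double-hop satellite", 0.94)]:    # from the text
    print(f"{label:>24}: {rus_to_fill_pipe(T1_BPS, rtt)} RUs outstanding")
```

A fixed pacing window tuned for the short-haul case (a handful of RUs) leaves a satellite T1 idle most of the time, which is why even the 3737's immediate-ACK trick couldn't fill the pipe once round-trip latency grew.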

some old 3737 related email
https://www.garlic.com/~lynn/2011g.html#email880130
https://www.garlic.com/~lynn/2011g.html#email880606
https://www.garlic.com/~lynn/2018f.html#email880715
https://www.garlic.com/~lynn/2018f.html#email880725
https://www.garlic.com/~lynn/2011g.html#email881005

HSDT posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt
dumb terminal emulation posts
https://www.garlic.com/~lynn/subnetwork.html#terminal
IBM demise of wild ducks, downturn, downfall, breakup posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

some recent posts mentioning 3737
https://www.garlic.com/~lynn/2023.html#95 IBM San Jose
https://www.garlic.com/~lynn/2022e.html#33 IBM 37x5 Boxes
https://www.garlic.com/~lynn/2022c.html#80 Peer-Coupled Shared Data
https://www.garlic.com/~lynn/2022c.html#14 IBM z16: Built to Build the Future of Your Business
https://www.garlic.com/~lynn/2021j.html#32 IBM Downturn
https://www.garlic.com/~lynn/2021j.html#31 IBM Downturn
https://www.garlic.com/~lynn/2021j.html#16 IBM SNA ARB
https://www.garlic.com/~lynn/2021h.html#49 Dynamic Adaptive Resource Management
https://www.garlic.com/~lynn/2021d.html#14 The Rise of the Internet
https://www.garlic.com/~lynn/2021c.html#97 What's Fortran?!?!
https://www.garlic.com/~lynn/2021c.html#83 IBM SNA/VTAM (& HSDT)

--
virtualization experience starting Jan1968, online at home since Mar1970

XTP, OSI & LAN/MAC

From: Lynn Wheeler <lynn@garlic.com>
Subject: XTP, OSI & LAN/MAC
Date: 17 Feb, 2023
Blog: Facebook
LAN/MAC trivia: there were some number of gov/military types involved with XTP, and some gov. agencies were directing elimination of the Internet and a move to (OSI) GOSIP. So (for them) XTP was taken to the ISO-chartered ANSI group responsible for OSI level 3&4 (network & transport) standards (X3S3.3) as "HSP". Eventually they said they couldn't do it, because there was an ISO directive that they could only standardize protocols that conformed to the OSI model. XTP didn't conform to the OSI model because 1) it supported an internetworking protocol (which doesn't exist in the OSI model, sitting between levels 3&4), 2) it went directly to the LAN MAC interface (which also doesn't exist in the OSI model, sitting somewhere in the middle of OSI level 3), and 3) it skipped the level 3/4 interface (going directly from transport to LAN MAC).

In the mid-80s, the (IBM) communication group had been fighting off release of mainframe TCP/IP. When they lost, they changed tactics and said that because they had corporate strategic ownership of everything that crossed datacenter walls, the TCP/IP product had to be released through them. What shipped got an aggregate 44kbytes/sec using nearly a whole 3090 processor. I did the changes for RFC1044 support, and in some tuning tests at Cray Research, between a Cray and a 4341, got sustained channel throughput using only a modest amount of the 4341 processor (something like a 500 times improvement in bytes moved per instruction executed). The communication group also strongly opposed me being on (Chesson's) XTP technical advisory board.

XTP &/or HSP posts
https://www.garlic.com/~lynn/subnetwork.html#xtphsp
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet
RFC 1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM Introduces 'Vela' Cloud AI Supercomputer Powered by Intel, Nvidia

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM Introduces 'Vela' Cloud AI Supercomputer Powered by Intel, Nvidia
Date: 17 Feb, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#100 IBM Introduces 'Vela' Cloud AI Supercomputer Powered by Intel, Nvidia

IBM Builds An AI Supercomputer On The Cheap In Its Cloud
https://www.nextplatform.com/2023/02/17/ibm-builds-an-ai-supercomputer-on-the-cheap-in-its-cloud/
You could build a Vela machine of your own by shopping for second-hand servers, CPUs and GPUs, and switches out on eBay, and IBM says in a blog unveiling the machine that the components of the machine were chosen precisely so IBM Cloud could deploy clones of this system in any one of its dozens of datacenters around the world. And we would add, do so without having to worry about export controls given the relative vintage of the CPUs, GPUs, and switching involved.

... snip ...

megadatacenter posts
https://www.garlic.com/~lynn/submisc.html#megadatacenter

--
virtualization experience starting Jan1968, online at home since Mar1970

Survival of the Richest

From: Lynn Wheeler <lynn@garlic.com>
Subject: Survival of the Richest
Date: 18 Feb, 2023
Blog: Facebook
Survival of the Richest
https://www.ineteconomics.org/perspectives/podcasts/survival-of-the-richest
Global Inequality Report, which highlights the accelerating pace at which the world's billionaires have increased their wealth exponentially in recent years. They also discuss the ways in which governments can reverse this trend through taxation.

... snip ...

inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
tax fraud, tax evasion, tax loopholes, tax avoidance, tax havens posts
https://www.garlic.com/~lynn/submisc.html#tax.evasion
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

Anti-Union Capitalism Is Wrecking America

From: Lynn Wheeler <lynn@garlic.com>
Subject: Anti-Union Capitalism Is Wrecking America
Date: 18 Feb, 2023
Blog: Facebook
Bernie Sanders: Anti-Union Capitalism Is Wrecking America. Workers deserve a better deal than the unfettered capitalism that is destroying our health, our democracy, and our planet.
https://www.thenation.com/article/society/bernie-sanders-angry-about-capitalism/

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
tax fraud, tax evasion, tax loopholes, tax avoidance, tax havens posts
https://www.garlic.com/~lynn/submisc.html#tax.evasion

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM CICS

From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM CICS
Date: 18 Feb, 2023
Blog: Facebook
Within a year of taking a two-credit-hr intro to fortran/computers, the univ. hired me fulltime, responsible for OS/360. Then the univ. library got an ONR grant to do an online catalog ... some of the money went for a 2321 datacell. The library online catalog was also selected to be a beta test for the original CICS product, and debugging CICS was added to my duties. One of the first problems was that CICS wouldn't come up ... it turns out CICS had some hard-coded BDAM options that hadn't been documented, and the library had built its files with a different set of options. Some from the Yelavich pages (gone 404, but live on at the wayback machine):
https://web.archive.org/web/20071124013919/http://www.yelavich.com/history/toc.htm
Bob Yelavich also in mainframe hall of fame
https://www.enterprisesystemsmedia.com/mainframehalloffame
... I'm just above him. Others from:
https://web.archive.org/web/20050409124902/http://www.yelavich.com/cicshist.htm
https://web.archive.org/web/20060325095613/http://www.yelavich.com/history/ev200401.htm
https://web.archive.org/web/20071124013919/http://www.yelavich.com/history/ev200402.htm
https://web.archive.org/web/20090107054344/http://www.yelavich.com/history/ev200402.htm

cics &/or bdam posts
https://www.garlic.com/~lynn/submain.html#cics

--
virtualization experience starting Jan1968, online at home since Mar1970

Early Webservers

From: Lynn Wheeler <lynn@garlic.com>
Subject: Early Webservers
Date: 18 Feb, 2023
Blog: Facebook
the first webserver in the US was on the SLAC (CERN sister installation) mainframe vm system (SLAC also hosted the monthly mainframe user group meetings, "BAYBUNCH")
https://ahro.slac.stanford.edu/wwwslac-exhibit
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994
list on wikipedia
https://en.wikipedia.org/wiki/List_of_websites_founded_before_1995

related recent thread
https://www.garlic.com/~lynn/2023.html#82 Memories of Mosaic
https://www.garlic.com/~lynn/2023.html#83 Memories of Mosaic
https://www.garlic.com/~lynn/2023.html#84 Memories of Mosaic
https://www.garlic.com/~lynn/2023.html#85 Memories of Mosaic

other misc.
https://www.linkedin.com/pulse/inventing-internet-lynn-wheeler/
coworker at science center
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/
was responsible for the IBM internal network (larger than the arpanet/internet from just about the beginning until sometime mid/late 80s); the technology was also used for the IBM corporate-sponsored univ. BITNET (also larger than the internet for a period)
https://en.wikipedia.org/wiki/BITNET
old email from IBMer responsible for setting up EARN (bitnet in europe):
https://www.garlic.com/~lynn/2001h.html#email840320

internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

other trivia: some of the MIT CTSS/7094 people
https://en.wikipedia.org/wiki/Compatible_Time-Sharing_System
had gone to project MAC on 5th flr, for multics
https://en.wikipedia.org/wiki/Multics
others went to the IBM science center on the 4th flr and did virtual machines (cp40/cms & cp67/cms, precursor to vm370),
https://en.wikipedia.org/wiki/CP/CMS
online & performance apps, internal corporate network (see zvm-50th-part-3 account), etc. CTSS RUNOFF
https://en.wikipedia.org/wiki/TYPSET_and_RUNOFF#CTSS
was redone as "script" for CMS. Then GML was invented at the science center in 1969 (letters chosen from the 1st letters of the inventors' last names), and GML tag processing was added to script.
https://web.archive.org/web/20230703135757/http://www.sgmlsource.com/history/sgmlhist.htm
after a decade, GML morphs into the ISO standard SGML ... and then after another decade, SGML morphs into HTML at CERN.

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
gml/sgml/html posts
https://www.garlic.com/~lynn/submain.html#sgml

other recent posts mentioning SLAC &/or BAYBUNCH
https://www.garlic.com/~lynn/2023.html#70 GML, SGML, & HTML
https://www.garlic.com/~lynn/2023.html#55 z/VM 50th - Part 6, long winded zm story (before z/vm)
https://www.garlic.com/~lynn/2023.html#46 MTS & IBM 360/67
https://www.garlic.com/~lynn/2023.html#31 IBM Change
https://www.garlic.com/~lynn/2023.html#24 IBM Punch Cards
https://www.garlic.com/~lynn/2022g.html#91 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022g.html#60 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022g.html#59 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022g.html#58 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022g.html#56 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022g.html#54 Stanford SLAC (and BAYBUNCH)
https://www.garlic.com/~lynn/2022g.html#18 Early Internet
https://www.garlic.com/~lynn/2022f.html#69 360/67 & DUMPRX
https://www.garlic.com/~lynn/2022f.html#50 z/VM 50th - part 3
https://www.garlic.com/~lynn/2022f.html#49 z/VM 50th - part 2
https://www.garlic.com/~lynn/2022f.html#37 What's something from the early days of the Internet which younger generations may not know about?
https://www.garlic.com/~lynn/2022f.html#10 9 Mainframe Statistics That May Surprise You
https://www.garlic.com/~lynn/2022e.html#102 Mainframe Channel I/O
https://www.garlic.com/~lynn/2022e.html#9 VM/370 Going Away
https://www.garlic.com/~lynn/2022e.html#8 VM Workship ... VM/370 50th birthday
https://www.garlic.com/~lynn/2022d.html#72 WAIS. Z39.50
https://www.garlic.com/~lynn/2022d.html#56 CMS OS/360 Simulation
https://www.garlic.com/~lynn/2022c.html#108 TCMs & IBM Mainframe
https://www.garlic.com/~lynn/2022c.html#99 IBM Bookmaster, GML, SGML, HTML
https://www.garlic.com/~lynn/2022c.html#77 IBM Mainframe market was Re: Approximate reciprocals
https://www.garlic.com/~lynn/2022c.html#41 CMSBACK & VMFPLC
https://www.garlic.com/~lynn/2022c.html#8 Cloud Timesharing
https://www.garlic.com/~lynn/2022b.html#111 The Rise of DOS: How Microsoft Got the IBM PC OS Contract
https://www.garlic.com/~lynn/2022b.html#109 Attackers exploit fundamental flaw in the web's security to steal $2 million in cryptocurrency
https://www.garlic.com/~lynn/2022b.html#107 15 Examples of How Different Life Was Before The Internet
https://www.garlic.com/~lynn/2022b.html#67 David Boggs, Co-Inventor of Ethernet, Dies at 71
https://www.garlic.com/~lynn/2022b.html#28 Early Online
https://www.garlic.com/~lynn/2022.html#93 HSDT Pitches
https://www.garlic.com/~lynn/2022.html#89 165/168/3033 & 370 virtual memory
https://www.garlic.com/~lynn/2022.html#55 Precursor to current virtual machines and containers
https://www.garlic.com/~lynn/2022.html#36 Error Handling
https://www.garlic.com/~lynn/2022.html#3 GML/SGML/HTML/Mosaic

--
virtualization experience starting Jan1968, online at home since Mar1970

If Nothing Changes, Nothing Changes

From: Lynn Wheeler <lynn@garlic.com>
Subject: If Nothing Changes, Nothing Changes
Date: 19 Feb, 2023
Blog: Facebook
If Nothing Changes, Nothing Changes: The Nick Donofrio Story
https://www.amazon.com/If-Nothing-Changes-Donofrio-Story-ebook/dp/B0B178D91G/

... from a previous post
https://www.garlic.com/~lynn/2022f.html#57 The Man That Helped Change IBM

... The Man That Helped Change IBM
https://smallbiztrends.com/2022/08/the-man-that-helped-change-ibm.html
This week I celebrated my 700th episode of The Small Business Radio Show with Nicholas (Nick) Donofrio who began his career in 1964 at IBM. Ironically, I started at IBM in 1981 for the first 9 years of my career. Nick lasted a lot longer and remained there for 44 years. His leadership positions included division president for advanced workshops, general manager of the large-scale computing division, and executive vice president of innovation and technology. He has a new book about his career at IBM called "If Nothing Changes, Nothing Changes: The Nick Donofrio Story".

... snip ...

... and couple other posts
https://www.garlic.com/~lynn/2023.html#52 IBM Bureaucrats, Careerists, MBAs (and Empty Suits)
https://www.garlic.com/~lynn/2004d.html#15 "360 revolution" at computer history museuam (x-post)

... I was in an all-hands Austin meeting where it was said that Austin had told the IBM CEO that it was doing the RS/6000 project for the NYTimes, to move their newspaper system (ATEX) off VAXCluster ... but there would be dire consequences for anybody who let it leak that it wasn't being done.

One day Nick stopped in Austin and all the local executives were out of town. My wife put together hand-drawn charts and estimates for doing the NYTimes project for Nick ... and he approved it. That possibly contributed to offending so many people in Austin that it was suggested we do the project in San Jose.

It started out as HA/6000, but I renamed it HA/CMP (High Availability Cluster Multi-Processing) after starting to do technical/scientific cluster scale-up with national labs and commercial cluster scale-up with RDBMS vendors (who had VAXCluster support in the same source base with Unix support ... providing some APIs with VAXCluster semantics made it easier to port to HA/CMP). Within a couple weeks of the Jan1992 cluster scale-up meeting with the Oracle CEO (16-way mid92, 128-way ye92), cluster scale-up was transferred (to be announced as an IBM supercomputer for technical/scientific *ONLY*) and we were told we couldn't work on anything with more than four processors. A few months later, we left IBM.

... ref in this post&comments
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
others
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/
https://www.linkedin.com/pulse/inventing-internet-lynn-wheeler/
https://www.linkedin.com/pulse/memories-mosaic-lynn-wheeler/
https://www.linkedin.com/pulse/ibm-downfall-lynn-wheeler/

ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
801/risc, iliad, romp, rios, pc/rt, rs/6000, power, power/pc posts
https://www.garlic.com/~lynn/subtopic.html#801
ibm downfall, breakup, controlling market posts
https://www.garlic.com/~lynn/submisc.html#ibmdownfall

--
virtualization experience starting Jan1968, online at home since Mar1970

'Oligarchs run Russia. But guess what? They run the US as well'

From: Lynn Wheeler <lynn@garlic.com>
Subject: 'Oligarchs run Russia. But guess what? They run the US as well'.
Date: 19 Feb, 2023
Blog: Facebook
Bernie Sanders: 'Oligarchs run Russia. But guess what? They run the US as well'. The veteran senator is now part of Joe Biden's inner circle, and is still fighting his country's vast inequalities
https://www.theguardian.com/us-news/2023/feb/19/bernie-sanders-oligarchs-ok-angry-about-capitalism-interview
"Well, the fact is the Wall Street Journal is shocked - flabbergasted! - that an American president would have the courage to mention in his speech, say, that the oil industry made $200bn in profit, while jacking up prices for everyone; they are shocked to hear that a president wants to take on the greed of the pharmaceutical industry; shocked to hear a president talk about the need to raise teacher salaries. Joe Biden is far more conservative than I am. But to his credit, I think he has seen what the progressive movement is doing in this country. And he feels comfortable with some of our ideas - and I appreciate that."

... snip ...

inequality posts
https://www.garlic.com/~lynn/submisc.html#inequality
capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
tax fraud, tax evasion, tax loopholes, tax avoidance, tax havens posts
https://www.garlic.com/~lynn/submisc.html#tax.evasion

--
virtualization experience starting Jan1968, online at home since Mar1970

Early Webservers

From: Lynn Wheeler <lynn@garlic.com>
Subject: Early Webservers
Date: 20 Feb, 2023
Blog: Facebook
re:
https://www.garlic.com/~lynn/2023.html#109 Early Webservers

really old history ... I didn't know about the agencies as an undergraduate back in the 60s ... when I was rewriting a lot of CP67 code (precursor to the VM370 that, at SLAC, hosted the 1st webserver in the US) that the science center would pick up and ship in the product.
https://www.linkedin.com/pulse/inventing-internet-lynn-wheeler/

After joining IBM, I would be asked to teach computer&security classes ... reference to long ago and far away (gone 404, but lives on at wayback machine)
https://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml
also
https://www.linkedin.com/pulse/memories-mosaic-lynn-wheeler/
and
https://www.garlic.com/~lynn/2023.html#82 Memories of Mosaic

science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
nsfnet posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet
internet posts
https://www.garlic.com/~lynn/subnetwork.html#internet

--
virtualization experience starting Jan1968, online at home since Mar1970

After the storm, hopefully

Refed: **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: After the storm, hopefully
Newsgroups: alt.folklore.computers
Date: Mon, 20 Feb 2023 13:26:52 -1000
Alexander Schreiber <als@usenet.thangorodrim.de> writes:
I currently live in Switzerland. The place both has good regulations _and_ enforcement of same. Crap like PG&E literally waiting for a 100y old piece of steel to finally fail (and cause a huge forest fire) would _not_ fly around here. There is a very strong "We deliver quality. Period." spirit permeating the place, which I like a lot ;-)

... note in the past PG&E also got Public Utility Commission to approve higher rates to cover brush&fire remediation ... and then got fined for using it for dividends and executive bonuses ... after several fires ... always wondered if they also then got PUC approval for higher rates to cover the fines ... and also whether there were any clawbacks of the dividends and executive bonuses.

some past posts mentioning pg&e
https://www.garlic.com/~lynn/2018d.html#54 Report: Downed power lines sparked deadly California fires
https://www.garlic.com/~lynn/2013g.html#61 What Makes a bridge Bizarre?
https://www.garlic.com/~lynn/2012h.html#31 How do you feel about the fact that today India has more IBM employees than US?
https://www.garlic.com/~lynn/2011n.html#65 Soups

also go back to the 50s&60s in the northeast railroad corridors where they were paying dividends and inflated executive compensation and bonuses (including using track maintenance funds). I remember commuting by rail in the Boston area in the 70s ... where there were 5mph speed limits and the old timers would talk about it having been 60mph. Also areas of track referred to as freight car graveyards ... so many cars abandoned after repeated derailments (in contrast, as a kid out west, seeing rail maintenance crews coming through at least every other year).

other posts mentioning track maintenance
https://www.garlic.com/~lynn/2022c.html#88 How the Ukraine War - and COVID-19 - is Affecting Inflation and Supply Chains
https://www.garlic.com/~lynn/2016.html#1 I Feel Old
https://www.garlic.com/~lynn/2015b.html#42 Future of support for telephone rotary dial ?
https://www.garlic.com/~lynn/2013g.html#83 What Makes travel Bizarre?
https://www.garlic.com/~lynn/2013f.html#61 The cloud is killing traditional hardware and software
https://www.garlic.com/~lynn/2010p.html#52 TCM's Moguls documentary series
https://www.garlic.com/~lynn/2009i.html#62 Urban transportation
https://www.garlic.com/~lynn/2008e.html#50 fraying infrastructure
https://www.garlic.com/~lynn/2007u.html#32 What do YOU call the # sign?
https://www.garlic.com/~lynn/2006h.html#19 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#18 The Pankian Metaphor
https://www.garlic.com/~lynn/2004e.html#7 OT Global warming
https://www.garlic.com/~lynn/2003i.html#41 TGV in the USA?
https://www.garlic.com/~lynn/2002q.html#7 Big Brother -- Re: National IDs

... skimming infrastructure funds apparently is now deeply entrenched in the culture

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

--
virtualization experience starting Jan1968, online at home since Mar1970

After the storm, hopefully

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Re: After the storm, hopefully
Newsgroups: alt.folklore.computers
Date: Tue, 21 Feb 2023 07:23:52 -1000
Lynn Wheeler <lynn@garlic.com> writes:
... skimming infrastructure funds apparently is now deeply entrenched in the culture

re:
https://www.garlic.com/~lynn/2023.html#113 After the storm, hopefully

Rail Unions Warned Us: Greed is Dangerous
https://www.counterpunch.org/2023/02/21/rail-unions-warned-us-greed-is-dangerous/
In contract negotiations last year, they denounced a business model known as "precision scheduled railroading," which aims to boost profits by running bigger and faster trains with smaller crews. The practice has even earned a nickname among rail workers: "positive shareholder reaction." Combined with a lack of guaranteed sick pay, this created dangerous conditions for overworked rail employees.

... but didn't mention the track maintenance issue ... which has been around for at least decades ... and then going back to mid-1800s
http://phys.org/news/2012-01-railroad-hyperbole-echoes-dot-com-frenzy.html
https://www.amazon.com/Railroaded-Transcontinentals-Making-America-ebook/dp/B0051GST1U

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism

... some archived posts mentioning "railroaded"
https://www.garlic.com/~lynn/2019e.html#44 Corporations Are People
https://www.garlic.com/~lynn/2019c.html#75 Packard Bell/Apple
https://www.garlic.com/~lynn/2019c.html#43 How a Right-Wing Attack on Protections for Native American Children Could Upend Indian Law
https://www.garlic.com/~lynn/2019b.html#9 England: South Sea Bubble - The Sharp Mind of John Blunt
https://www.garlic.com/~lynn/2019b.html#81 China Retools Vast Global Building Push Criticized as Bloated and Predatory
https://www.garlic.com/~lynn/2019b.html#8 Corporations Are People' Is Built on an Incredible 19th-Century Lie
https://www.garlic.com/~lynn/2019b.html#71 IBM revenue has fallen for 20 quarters -- but it used to run its business very differently
https://www.garlic.com/~lynn/2019b.html#47 Union Pacific Announces 150th Anniversary Celebration Commemorating Transcontinental Railroad's Completion
https://www.garlic.com/~lynn/2019b.html#3 Corporations Are People' Is Built on an Incredible 19th-Century Lie
https://www.garlic.com/~lynn/2019b.html#19 Does Capitalism Kill Cooperation?
https://www.garlic.com/~lynn/2019.html#60 Grant (& Conkling)
https://www.garlic.com/~lynn/2018e.html#72 Top CEOs' compensation increased 17.6 percent in 2017
https://www.garlic.com/~lynn/2018c.html#52 We the Corporations: How American Businesses Won Their Civil Rights
https://www.garlic.com/~lynn/2015b.html#42 Future of support for telephone rotary dial ?
https://www.garlic.com/~lynn/2014m.html#39 LEO
https://www.garlic.com/~lynn/2014m.html#37 Income Inequality
https://www.garlic.com/~lynn/2014b.html#73 Royal Pardon For Turing
https://www.garlic.com/~lynn/2012i.html#1 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012g.html#76 Monopoly/ Cartons of Punch Cards
https://www.garlic.com/~lynn/2012.html#62 Railroaded
https://www.garlic.com/~lynn/2012.html#57 The Myth of Work-Life Balance

--
virtualization experience starting Jan1968, online at home since Mar1970

Years Before East Palestine Disaster, Congressional Allies of the Rail Industry Intervened to Block Safety Regulations

From: Lynn Wheeler <lynn@garlic.com>
Subject: Years Before East Palestine Disaster, Congressional Allies of the Rail Industry Intervened to Block Safety Regulations
Date: 22 Feb, 2023
Blog: Facebook
recent "railroaded" posts
https://www.garlic.com/~lynn/2023.html#113 After the storm, hopefully
https://www.garlic.com/~lynn/2023.html#114 After the storm, hopefully

Years Before East Palestine Disaster, Congressional Allies of the Rail Industry Intervened to Block Safety Regulations. Records show an all-out push to delay and repeal train safety regulations.
https://theintercept.com/2023/02/21/east-palestine-rail-safety-congress/
In 2015 alone, Norfolk Southern retained 47 federal lobbyists and focused on fighting against ECP regulation. The company disclosed that it "opposed additional speed limitations and requiring ECP brakes." Other rail giants, including BNSF and CSX, deployed lobbyists on the regulations as well, records show.

capitalism posts
https://www.garlic.com/~lynn/submisc.html#capitalism
stock buyback posts
https://www.garlic.com/~lynn/submisc.html#stock.buyback

--
virtualization experience starting Jan1968, online at home since Mar1970

The Bunker: Tarnished Silver Bullets

From: Lynn Wheeler <lynn@garlic.com>
Subject: The Bunker: Tarnished Silver Bullets
Date: 22 Feb, 2023
Blog: Facebook
The Bunker: Tarnished Silver Bullets. This week in The Bunker: the Pentagon's preoccupation with fools-gold-plated warplanes means we end up with costly weapons that can't fly as often; Navy recruiting woes; a grim invasion anniversary; and more.
https://www.pogo.org/analysis/2023/02/the-bunker-tarnished-silver-bullets

The Pentagon Labyrinth
https://www.pogo.org/podcasts/pentagon-labyrinth
http://chuckspinney.blogspot.com/p/pentagon-labyrinth.html
http://dnipogo.org/labyrinth/

other recent posts mentioning pentagon labyrinth
https://www.garlic.com/~lynn/2023.html#75 The Pentagon Saw a Warship Boondoggle
https://www.garlic.com/~lynn/2021j.html#73 "The Spoils of War": How Profits Rather Than Empire Define Success for the Pentagon
https://www.garlic.com/~lynn/2018e.html#83 The Pentagon's New Stealth Bookkeeping
https://www.garlic.com/~lynn/2017j.html#2 WW II cryptography
https://www.garlic.com/~lynn/2017i.html#64 The World America Made
https://www.garlic.com/~lynn/2017i.html#14 How to spot a dodgy company - never trust a high achiever
https://www.garlic.com/~lynn/2017h.html#109 Iraq, Longest War
https://www.garlic.com/~lynn/2017b.html#60 Why Does Congress Accept Perpetual Wars?
https://www.garlic.com/~lynn/2016.html#88 The Pentagon's Pricey Culture of Mediocrity
https://www.garlic.com/~lynn/2016.html#48 Thanks Obama

success of failure article (strategy to make more money from series of failures)
http://www.govexec.com/excellence/management-matters/2007/04/the-success-of-failure/24107/

past posts mentioning both "pentagon labyrinth" and success of failure culture
https://www.garlic.com/~lynn/2015h.html#70 Department of Defense Head Ashton Carter Enlists Silicon Valley to Transform the Military
https://www.garlic.com/~lynn/2013d.html#54 What Makes an Architecture Bizarre?
https://www.garlic.com/~lynn/2012d.html#60 Memory versus processor speed
https://www.garlic.com/~lynn/2011l.html#34 Scotland, was Re: Solving the Floating-Point Goldilocks Problem!
https://www.garlic.com/~lynn/2011l.html#0 Justifying application of Boyd to a project manager

military-industrial(-congressional) complex
https://www.garlic.com/~lynn/submisc.html#military.industrial.complex
perpetual war
https://www.garlic.com/~lynn/submisc.html#perpetual.war
capitalism
https://www.garlic.com/~lynn/submisc.html#capitalism
success of failure
https://www.garlic.com/~lynn/submisc.html#success.of.failure

--
virtualization experience starting Jan1968, online at home since Mar1970

IBM 5100

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: IBM 5100
Date: 23 Feb, 2023
Blog: Facebook
ACIS started out in the early 80s ... I think Danbury ... originally included something like $200M to give out in grants to univ ... other projects also ... involved in corporate sponsored univ. network BITNET:
https://en.wikipedia.org/wiki/BITNET
... a co-worker at the (cambridge) science center was responsible for the internal network; the technology was also used for bitnet.
https://en.wikipedia.org/wiki/Edson_Hendricks
https://www.linkedin.com/pulse/zvm-50th-part-3-lynn-wheeler/
science centers started in the mid-60s (nearly 20yrs before) ... cambridge, houston, palo alto, Los Angeles, grenoble, pisa, etc. Early/mid 70s, the 5100 was done at the palo alto science center (not Los Gatos; Los Gatos was an ASDD location that then morphed into the VLSI center)
https://en.wikipedia.org/wiki/IBM_5100
Palo Alto also did the APL microcode assist for 370/145, claims that it ran (some) apl as fast as 370/168

Note after the unbundling announcement, HONE was originally created for SEs to have online practice with guest operating systems running on CP/67 (precursor to vm370, done at cambridge science center). Cambridge also did the port of APL\360 to CP67/CMS for CMS\APL. HONE then also started deploying CMS\APL-based online sales&marketing tools ... and the use for guest operating systems just dwindled away. Early/mid-70s, the US HONE datacenters were consolidated in Palo Alto across the back parking lot from the Palo Alto Science Center ... which had morphed CMS\APL into APL\CMS for VM370/CMS

cambridge science center posts
https://www.garlic.com/~lynn/subtopic.html#545tech
internal network posts
https://www.garlic.com/~lynn/subnetwork.html#internalnet
bitnet (& EARN) posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet
HONE (& APL) posts
https://www.garlic.com/~lynn/subtopic.html#hone
unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle

--
virtualization experience starting Jan1968, online at home since Mar1970

Google Tells Some Employees to Share Desks After Pushing Return-to-Office Plan

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@garlic.com>
Subject: Google Tells Some Employees to Share Desks After Pushing Return-to-Office Plan
Date: 23 Feb, 2023
Blog: Facebook
Google Tells Some Employees to Share Desks After Pushing Return-to-Office Plan
https://gizmodo.com/google-cloud-return-to-office-remote-work-big-tech-1850149200

I took a 2 semester hr intro to fortran/computers. At the end of the semester got job redoing 1401 MPIO on 360/30 (unit record front end for 709 tape->tape) ... univ. shutdown datacenter on weekends and I had the whole place dedicated for 48hrs straight (although 48hrs w/o sleep made monday classes hard). Within a year of taking the intro course, I'm hired fulltime responsible for OS/360 (univ. had been sold 360/67 for TSS/360 ... which never came to production so ran as 360/65 with os/360). I get a 2741 terminal next to the desk in my office. Then before I graduate, I'm hired fulltime into a small group in the Boeing CFO office to help with the formation of Boeing Computer Services (consolidate all dataprocessing into an independent business unit to better monetize the investment, including offering services to non-Boeing entities). I thought the Renton datacenter was possibly the largest in the world, 360/65s were arriving faster than they could be installed, boxes constantly staged in hallways around the machine room. Lots of politics between the head of the Renton datacenter and the CFO ... who only had a 360/30 up at Boeing field for payroll, although they enlarged it to install a 360/67 for me to play with (when I'm not doing other stuff). Then when I graduate, I join the science center in Cambridge (instead of staying at Boeing).

Late 70s, at san jose research and fridays after work, we frequently discussed how to get the (mostly) IBM computer illiterate to use computers. Then there was a rapidly spreading rumor that members of the corporate executive committee were using email to communicate ... and start seeing managers rerouting 3270 terminal orders to their desks (back when 3270s were part of the annual budget and required VP level sign-off) ... which typically were powered on in the morning and went through the day unused, just burning the PROFs menu into the screen (saw this symptom continue through the early 90s, when PS2/486s with large displays were redirected to manager desks as a form of status symbol, implying some computer literacy). Along the way, there was a business case that showed a 3yr amortized 3270 was about the same monthly cost as the business phone that appeared standard on all desks.

from (silicon valley) "real programmers"
Real Programmers never work 9 to 5. If any real programmers are around at 9am, it's because they were up all night.

... snip ...

... for the 10% that do 90% of the work, they have to concentrate on complex issues and interruptions destroy that concentration (they also are proficient in programming languages, analogous to natural language proficiency ... think&dream in the language w/o needing translation) ... which also applies to cubicles & "open offices"

Google got it wrong. The open-office trend is destroying the workplace.
https://www.washingtonpost.com/posteverything/wp/2014/12/30/google-got-it-wrong-the-open-office-trend-is-destroying-the-workplace/

some past archived posts that mention "open office" and/or real programmers never work 9 to 5
https://www.garlic.com/~lynn/2023.html#61 Software Process
https://www.garlic.com/~lynn/2022h.html#90 Psychology of Computer Programming
https://www.garlic.com/~lynn/2022d.html#28 Remote Work
https://www.garlic.com/~lynn/2021d.html#49 Real Programmers and interruptions
https://www.garlic.com/~lynn/2021c.html#78 Air Force opens first Montessori Officer Training School
https://www.garlic.com/~lynn/2019.html#6 Fwd: It's Official: Open-Plan Offices Are Now the Dumbest Management Fad of All Time | Inc.com
https://www.garlic.com/~lynn/2018b.html#56 Computer science hot major in college (article)
https://www.garlic.com/~lynn/2017i.html#53 When Working From Home Doesn't Work
https://www.garlic.com/~lynn/2017h.html#27 OFF TOPIC: University of California, Irvine, revokes 500 admissions
https://www.garlic.com/~lynn/2016f.html#19 And it's gone --The true cost of interruptions
https://www.garlic.com/~lynn/2016d.html#72 Five Outdated Leadership Ideas That Need To Die
https://www.garlic.com/~lynn/2015b.html#24 What were the complaints of binary code programmers that not accept Assembly?
https://www.garlic.com/~lynn/2014.html#24 Scary Sysprogs and educating those 'kids'
https://www.garlic.com/~lynn/2014.html#23 Scary Sysprogs and educating those 'kids'
https://www.garlic.com/~lynn/2013m.html#16 Work long hours (Was Re: Pissing contest(s))
https://www.garlic.com/~lynn/2012d.html#21 Inventor of e-mail honored by Smithsonian
https://www.garlic.com/~lynn/2011e.html#22 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011e.html#20 Multiple Virtual Memory
https://www.garlic.com/~lynn/2011b.html#17 Rare Apple I computer sells for $216,000 in London
https://www.garlic.com/~lynn/2007v.html#36 folklore indeed
https://www.garlic.com/~lynn/2004q.html#0 Single User: Password or Certificate
https://www.garlic.com/~lynn/2002e.html#39 Why Use *-* ?
https://www.garlic.com/~lynn/2001e.html#31 High Level Language Systems was Re: computer books/authors (Re: FA:

--
virtualization experience starting Jan1968, online at home since Mar1970


--
previous, next, index - home